uitnlp/OpenViVQA-dataset
---
license: mit
task_categories:
- visual-question-answering
language:
- vi
size_categories:
- 10K<n<100K
---

OpenViVQA: Open-domain Vietnamese Visual Question Answering
=====

![examples](data_examples.png)

The OpenViVQA dataset contains <b>11,000+</b> images with <b>37,000+</b> question-answer pairs, introducing the task of text-based open-ended visual question answering in Vietnamese. The dataset is publicly available to the research community through the VLSP 2023 - ViVRC shared task challenge. You can access the dataset, as well as submit your results for evaluation on the private test set, via the [Codalab](https://codalab.lisn.upsaclay.fr/competitions/15212#participate) evaluation system.

Links to the OpenViVQA dataset:

- [Train images](train-images.zip) + [train annotations](vlsp2023_train_data.json).
- [Dev images](dev-images.zip) + [dev annotations](vlsp2023_dev_data.json).
- [Test images](test-images.zip) + [test annotations (without answers)](vlsp2023_test_data.json).

If you mention or use any information from our dataset, please cite our paper:

```
@article{NGUYEN2023101868,
  title = {OpenViVQA: Task, dataset, and multimodal fusion models for visual question answering in Vietnamese},
  journal = {Information Fusion},
  volume = {100},
  pages = {101868},
  year = {2023},
  issn = {1566-2535},
  doi = {https://doi.org/10.1016/j.inffus.2023.101868},
  url = {https://www.sciencedirect.com/science/article/pii/S1566253523001847},
  author = {Nghia Hieu Nguyen and Duong T.D. Vo and Kiet {Van Nguyen} and Ngan Luu-Thuy Nguyen},
  keywords = {Visual question answering, Vision-language understanding, Low-resource languages, Information fusion, Multimodal representation},
  abstract = {In recent years, visual question answering (VQA) has attracted attention from the research community because of its highly potential applications (such as virtual assistance on intelligent cars, assistant devices for blind people, or information retrieval from document images using natural language as queries) and challenge. The VQA task requires methods that have the ability to fuse the information from questions and images to produce appropriate answers. Neural visual question answering models have achieved tremendous growth on large-scale datasets which are mostly for resource-rich languages such as English. However, available datasets narrow the VQA task as the answers selection task or answer classification task. We argue that this form of VQA is far from human ability and eliminates the challenge of the answering aspect in the VQA task by just selecting answers rather than generating them. In this paper, we introduce the OpenViVQA (Open-domain Vietnamese Visual Question Answering) dataset, the first large-scale dataset for VQA with open-ended answers in Vietnamese, consists of 11,000+ images associated with 37,000+ question–answer pairs (QAs). Moreover, we proposed FST, QuMLAG, and MLPAG which fuse information from images and questions, then use these fused features to construct answers as humans iteratively. Our proposed methods achieve results that are competitive with SOTA models such as SAAA, MCAN, LORA, and M4C. The dataset (https://github.com/hieunghia-pat/OpenViVQA-dataset) is available to encourage the research community to develop more generalized algorithms including transformers for low-resource languages such as Vietnamese.}
}
```

### Contact

This repository was constructed under the instruction of the [NLP@UIT Research Group](https://nlp.uit.edu.vn/). For more information, contact the following author:

1. Nghia Hieu Nguyen. Email: nghiangh@uit.edu.vn
Naomibas/llm-system-prompts-benchmark
---
license: apache-2.0
language:
- en
pretty_name: 100 system prompts for benchmarking large language models
size_categories:
- n<1K
---

# Dataset Card for 100 System Prompts for Benchmarking Large Language Models

This dataset is a collection of 100 system prompts for large language models.

## Dataset Details

### Dataset Description

These 100 system prompts test a model's ability to follow grammatical patterns; answer basic multiple choice questions; act according to a particular persona; memorize information; and speak in French.

Files:
- **hundred_system_prompts.py**: refer to this to see the (prompt, probe, function) triplets, as well as the helper functions.
- **hundred_system_prompts.json**: this is purely for display purposes.
- **run_benchmark.py**: this runs the 100 tests on a model, without any context other than the system prompt and the probe.
- **create_json_file.py**: a small file that was used to create the **hundred_system_prompts.json** file.

More info:
- **Curated by:** Naomi Bashkansky
- **Language(s) (NLP):** en
- **License:** apache-2.0

### Dataset Sources

- **Repository:** https://github.com/likenneth/persona
- **Paper:** Forthcoming.

## Uses

A benchmark for large language models: how good are LLMs at following a system prompt? The benchmark tests both basic capabilities (is a model able to follow the system prompt) and basic alignment (does a model that *can* follow the system prompt do so). It can be used to compare different models, or to help in performing interventions on a model to make it better at following system prompts.

### Direct Use

This dataset is released open source. Researchers are especially encouraged to use this dataset.

## Dataset Structure

"prompt" is given as a system prompt to a large language model. "probe" is given as a user inquiry; its purpose is to elicit a response that allows us to check whether the LLM is following the system prompt. "function" checks whether the LLM's response to the probe follows the system prompt; it returns a number from 0 (not following) to 1 (following).
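For illustration, here is a hypothetical triplet in that style together with a minimal scoring loop. The prompt, probe, and check below are invented, not taken from `hundred_system_prompts.py`:

```python
# A hypothetical (prompt, probe, function) triplet -- the real ones live in
# hundred_system_prompts.py.
triplet = (
    "Always end your answer with the word 'meow'.",  # system prompt
    "What color is the sky?",                        # probe
    lambda response: 1.0 if response.strip().lower().endswith("meow") else 0.0,
)

def run_one_test(generate, prompt, probe, check):
    """Score one item: `generate` maps (system_prompt, user_message) to text."""
    response = generate(prompt, probe)
    return check(response)  # 0 (not following) .. 1 (following)

# Usage, given some `my_generate` callable: run_one_test(my_generate, *triplet)
```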
## Dataset Creation

### Curation Rationale

There exists no benchmark of system prompts.

### Source Data

#### Data Collection and Processing

Process: thinking of system prompts, probes, and testing functions. Running the system prompts on GPT-4 to check that GPT-4 is (mostly) able to follow them. Testing functions are in Python.

#### Who are the source data producers?

Naomi Bashkansky made most of the system prompts, and Kenneth Li made the rest.

#### Personal and Sensitive Information

No.

## Bias, Risks, and Limitations

Limitation: as models become more capable, this benchmark may become outdated/too easy. The ideal benchmark is one that tests the model's alignment - its propensity toward following the system prompt - rather than its ability to do so.

Bias: this dataset is only in English, with the exception of three French prompts.

## Citation

**BibTeX:** Forthcoming.

**APA:** Forthcoming.

## Dataset Card Authors

Naomi Bashkansky, Kenneth Li

## Dataset Card Contact

naomibashkansky@college.harvard.edu, ke_li@g.harvard.edu
Jaymax/FDA_Pharmaceuticals_FAQ
---
configs:
- config_name: default
  data_files:
  - split: train
    path: train.csv
  - split: validation
    path: validation.csv
  - split: test
    path: test.csv
dataset_info:
  features:
  - name: Question
    dtype: string
  - name: Answer
    dtype: string
  splits:
  - name: train
    num_bytes: 1467644
    num_examples: 1433
  - name: validation
    num_bytes: 170537
    num_examples: 169
  - name: test
    num_bytes: 82830
    num_examples: 79
  download_size: 1721011
  dataset_size: 1721011
license: cc-by-4.0
---

# FDA Pharmaceutical Q&A Dataset

## Description

This dataset contains a collection of question-and-answer pairs related to pharmaceutical regulatory compliance provided by the Food and Drug Administration (FDA). It is designed to support research and development in the field of natural language processing, particularly for tasks involving information retrieval, question answering, and conversational agents within the pharmaceutical domain.

## Dataset Structure

The dataset consists of structured Q&A pairs.

### Data Fields

- `Question`: The question text, beginning with a citation indicating the source document.
- `Answer`: The corresponding answer provided, as per the FDA guidance.

### Data Splits

The dataset is partitioned into training, validation, and testing sets to support a standard machine learning workflow.

### Source Data

The Q&A pairs were extracted from official FDA documents that are publicly accessible. Each question contains a citation referencing its source document to ensure traceability and provide context. The data was compiled with the assistance of the ChatGPT-3.5 Turbo model.

It is important to note that the dataset reflects the information available up to the date of collection. The dataset may not encompass updates or documents released subsequent to that date, and users are advised to check for the most recent information when using the data for time-sensitive applications.

## Licensing

This dataset is compiled in accordance with the FDA's commitment to ensuring accessibility for all individuals, as outlined on their accessibility webpage. Users must ensure that any utilization of this dataset adheres to these principles, particularly the guidelines under Section 508 of the Rehabilitation Act, which mandate accessible Information and Communication Technology (ICT). For more information, please refer to [Accessibility @ FDA](https://www.fda.gov/about-fda/about-website/accessibility-fda).

## Citation

When referencing this dataset in your research or applications, please cite it as follows:

Kim, J., & Min, M. (2024). From RAG to QA-RAG: Integrating Generative AI for Pharmaceutical Regulatory Compliance Process. arXiv preprint arXiv:2402.01717.

## Contact

For any inquiries regarding this dataset, please contact rlawodnd1127@gmail.com.
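## Loading the Dataset

A minimal loading sketch using the 🤗 `datasets` library; the split and field names follow the metadata above:

```python
from datasets import load_dataset

# Load the train/validation/test splits declared in this repository.
dataset = load_dataset("Jaymax/FDA_Pharmaceuticals_FAQ")

example = dataset["train"][0]
print(example["Question"])  # begins with a citation of the source FDA document
print(example["Answer"])
```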
silk-road/Haruhi-Zero-RolePlaying-movie-PIPPA
---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
size_categories:
- 1K<n<10K
---

# 2000 Chinese RoleCards from IMDB_250 Movies and PIPPA

Role cards for extending zero-shot role-playing.

Of these, 870 characters come from summaries of movie subtitles (ids of the form movie_xx); 406 of those cards were translated into Simplified Chinese, and the rest were left untranslated (so some Traditional Chinese or English is mixed in).

The other 1,270 characters come from a translation of the PIPPA dataset.

- [凌云志](https://github.com/Kirovsiki) @ Bournemouth University crawled the movie subtitles using the Shooter (射手) API.
- 李鲁鲁 completed the summarization from subtitles to role cards, as well as the translation of the data (via OpenAI).

# Next Steps

We plan to use these cards to collect data from the OpenAI, CharacterGLM, and KoboldAI APIs, following the Baize approach.

Project homepage: https://github.com/LC1332/Chat-Haruhi-Suzumiya

If you would like to discuss joining our project, you can send your contact information via private message to https://www.zhihu.com/people/cheng-li-47
nuprl/EditPackFT-Multi
---
dataset_info:
  features:
  - name: commit
    dtype: string
  - name: old_file
    dtype: string
  - name: new_file
    dtype: string
  - name: old_contents
    dtype: string
  - name: new_contents
    dtype: string
  - name: subject
    dtype: string
  - name: message
    dtype: string
  - name: lang
    dtype: string
  - name: license
    dtype: string
  - name: repos
    dtype: string
  - name: config
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 1122708281.0324206
    num_examples: 306133
  download_size: 514249902
  dataset_size: 1122708281.0324206
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

Multilingual version of https://huggingface.co/datasets/nuprl/EditPackFT

## Citation

If you use our work, please cite our paper as such:

```
@misc{cassano2023edit,
      title={Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions},
      author={Federico Cassano and Luisa Li and Akul Sethi and Noah Shinn and Abby Brennan-Jones and Anton Lozhkov and Carolyn Jane Anderson and Arjun Guha},
      year={2023},
      eprint={2312.12450},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}
```
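## Usage

A minimal usage sketch with the 🤗 `datasets` library, assuming the `lang` field holds language names as in EditPackFT (the value "Python" below is an assumption; check the actual values):

```python
from datasets import load_dataset

ds = load_dataset("nuprl/EditPackFT-Multi", split="train")

# Keep only edits to Python files (assumed `lang` value).
python_edits = ds.filter(lambda ex: ex["lang"] == "Python")

ex = python_edits[0]
print(ex["message"])       # commit message describing the edit
print(ex["old_contents"])  # file contents before the edit
print(ex["new_contents"])  # file contents after the edit
```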
rxavier/economicus
---
license: mit
task_categories:
- text-generation
- question-answering
language:
- en
tags:
- economics
- economy
- sharegpt
pretty_name: economicus
size_categories:
- 10K<n<100K
---

# Economicus dataset

## Introduction

Economicus is a synthetic dataset of graduate-level economics questions and answers generated by Mixtral 8x7B. It is a single-turn dataset of almost 13 million tokens.

## Dataset structure

```
{
  "id": 27,
  "source_id": "modern_economic_growth",
  "method": "grounded_qa_generation",
  "conversations": [
    {
      "from": "human",
      "value": "How do institutional differences affect economic growth?"
    },
    {
      "from": "gpt",
      "value": "Institutional differences affect economic growth by shaping the incentives individuals and firms have for accumulation, investment, and trade. Institutions can either promote or hinder economic growth, depending on the laws and regulations under which individuals and firms function."
    }
  ]
}
```

## Creation process

Economicus Q&A pairs are based on text extracts from these sources:

* Economic Growth 2nd ed (Barro & Sala-i-Martin)
* Recursive Macroeconomic Theory (Sargent & Ljungqvist)
* Advanced International Trade: Theory and Evidence (Feenstra)
* Advanced Macroeconomics 5th ed (Romer)
* Microeconomic Foundations: Choice and Competitive Markets (Kreps)
* Mostly Harmless Econometrics: An Empiricist's Companion (Angrist & Pischke)
* Microeconomic Theory (Mas-Colell, Whinston & Green)
* Introduction to Modern Economic Growth (Acemoglu)
* Econometric Analysis of Cross Section and Panel Data (Wooldridge)
* Econometrics (Hayashi)
* The Economics of Growth (Aghion & Howitt)
* Interest and Prices (Woodford)
* Labor Markets and Business Cycles (Shimer)
* Monetary Theory and Policy (Walsh)
* Open Economy Macroeconomics (Uribe & Schmitt-Grohé)
* Mathematical Methods and Models for Economists (de la Fuente)
* A Course in Game Theory (Osborne & Rubinstein)
* A First Course in Optimization Theory (Sundaram)
* Lectures on Macroeconomics (Blanchard & Fischer)
* Fundamental Methods in Mathematical Economics (Chiang & Wainwright)
* Dynamic Economics: Quantitative Methods and Applications (Adda & Cooper)

They were parsed using [marker](https://github.com/VikParuchuri/marker), which creates nicely formatted sections. These sections were joined to form groups of at least 2048 tokens (minus a 15% tolerance). Each of these groups was used in the following prompt.

```
You are building a compendium of statements or questions for Economics PhD students to solve that will be used in tests and exams.

Generate up to {n_questions} diverse questions. Use the book extract provided at the end of this prompt as a reference. Make questions appropriate for graduate-level students. Be varied with question formats.

Students answering these questions will not have access to the book that contains the extract, so do not mention anything like page numbers, section numbers or titles, chapter numbers or titles, equation numbers, theorem numbers, proposition numbers or exercise numbers.

Conform to this JSON schema: [{{"input": "a statement or question"}}, {{"input": "a statement or question"}}, ...]

You can only output valid JSON. The only valid key is "input".

### Extract (from {title} by {author}):

{extract}
```

Then, each question is sent to the model along with the extract from which it was generated:

```
Below is a statement or question for an economics PhD student. Please provide a detailed and complete answer to the question.

The answer should be long and elaborate, and should include as much information as possible relating to the input, including your own knowledge. Use LaTeX notation for equations and symbols.

Do not mention anything specific to the extract. Do not talk about the extract. Do not mention anything like page numbers, section numbers or titles, chapter numbers or titles, equation numbers, theorem numbers, proposition numbers or exercise numbers.

The question is based on the following extract from the book {title} by {author}:

### Extract:

{extract}

### Question:

{question}
```

The dataset is preprocessed to remove as many specific references to the extract as possible ("Explain figure 7.1", for example), because my prompt-fu is not great. Also, instances of `"the text"` (like "Explain the model A in the text.") are replaced by `"{title} by {author}"`.
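A minimal sketch of that last substitution; the helper below is illustrative, not the actual preprocessing script:

```python
import re

def replace_text_references(question: str, title: str, author: str) -> str:
    """Replace mentions of "the text" with the source book's title and author."""
    return re.sub(r"\bthe text\b", f"{title} by {author}", question)

print(replace_text_references(
    "Explain the model A in the text.",
    "Introduction to Modern Economic Growth",
    "Acemoglu",
))
# -> Explain the model A in Introduction to Modern Economic Growth by Acemoglu.
```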
HuggingFaceH4/orca_dpo_pairs_no_system_prompt
---
dataset_info:
  features:
  - name: chosen
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: rejected
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: prompt
    dtype: string
  splits:
  - name: train_prefs
    num_bytes: 52139704
    num_examples: 12359
  - name: test_prefs
    num_bytes: 2009711
    num_examples: 500
  download_size: 30509701
  dataset_size: 54149415
configs:
- config_name: default
  data_files:
  - split: train_prefs
    path: data/train_prefs-*
  - split: test_prefs
    path: data/test_prefs-*
---
2A2I/H4_no_robots
---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: prompt_id
    dtype: string
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: category
    dtype: string
  splits:
  - name: train
    num_bytes: 16496867
    num_examples: 9500
  - name: test
    num_bytes: 887460
    num_examples: 500
  download_size: 11045465
  dataset_size: 17384327
task_categories:
- text-generation
language:
- ar
pretty_name: لا روبوتات
license: cc-by-nc-4.0
---

### Dataset Card for "No Robots" 🙅‍♂️🤖

#### Summary

"No Robots" is a dataset consisting of 10,000 instructions and demonstrations, created by professional annotators. It was translated using the Google Cloud Platform Translation API. This dataset can be used to train language models to follow instructions more accurately (supervised fine-tuning, SFT). The "No Robots" dataset was created based on the dataset described in OpenAI's [InstructGPT](https://huggingface.co/papers/2203.02155) paper, and includes the following categories:

| Category         | Count |
|------------------|------:|
| Creation         |  4560 |
| Open Questions   |  1240 |
| Brainstorming    |  1120 |
| Chatting         |   850 |
| Rewriting        |   660 |
| Summarization    |   420 |
| Programming      |   350 |
| Classification   |   350 |
| Closed Questions |   260 |
| Extraction       |   190 |

#### Languages

This dataset is available in Arabic only. The original version in **English** can be found at [this link](https://huggingface.co/datasets/HuggingFaceH4/no_robots), and the **Turkish** version at [this link](https://huggingface.co/datasets/merve/tr-h4-norobots).

#### Data Fields

The columns are as follows:

* `prompt`: Specifies the instruction that the model should follow.
* `prompt_id`: A unique identifier.
* `messages`: A list of dictionaries; each dictionary describes a message (key: `content`) and who sent it (key: `role`).
* `category`: The task category; I did not translate this.

#### Splits

|           | train | test |
|-----------|------:|-----:|
| No Robots |  9500 |  500 |

#### License

The dataset is available under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode) license.

#### Citation Information

```
@misc{no_robots,
  author = {Nazneen Rajani and Lewis Tunstall and Edward Beeching and Nathan Lambert and Alexander M. Rush and Thomas Wolf},
  title = {No Robots},
  year = {2023},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/datasets/HuggingFaceH4/no_robots}}
}
```
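#### Loading Example

A minimal loading sketch; the field names follow the metadata above:

```python
from datasets import load_dataset

# Load the Arabic translation and inspect one dialogue.
ds = load_dataset("2A2I/H4_no_robots", split="train")

sample = ds[0]
print(sample["category"])           # the (untranslated) task category
for message in sample["messages"]:  # list of {"content": ..., "role": ...} dicts
    print(f'{message["role"]}: {message["content"]}')
```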
aai530-group6/ddxplus
---
language:
- en
license: cc-by-4.0
license_link: https://creativecommons.org/licenses/by/4.0/
tags:
- automatic-diagnosis
- automatic-symptom-detection
- differential-diagnosis
- synthetic-patients
- diseases
- health-care
pretty_name: DDXPlus
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- tabular-classification
task_ids:
- multi-class-classification
paperswithcode_id: ddxplus
configs:
- config_name: default
  data_files:
  - split: train
    path: "train.csv"
  - split: test
    path: "test.csv"
  - split: validate
    path: "validate.csv"
extra_gated_prompt: "By accessing this dataset, you agree to use it solely for research purposes and not for clinical decision-making."
extra_gated_fields:
  Consent: checkbox
  Purpose of use:
    type: select
    options:
    - Research
    - Educational
    - label: Other
      value: other
train-eval-index:
- config: default
  task: medical-diagnosis
  task_id: binary-classification
  splits:
    train_split: train
    eval_split: validate
  col_mapping:
    AGE: AGE
    SEX: SEX
    PATHOLOGY: PATHOLOGY
    EVIDENCES: EVIDENCES
    INITIAL_EVIDENCE: INITIAL_EVIDENCE
    DIFFERENTIAL_DIAGNOSIS: DIFFERENTIAL_DIAGNOSIS
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 Score
---

# Dataset Description

We are releasing under the CC-BY licence a new large-scale dataset for Automatic Symptom Detection (ASD) and Automatic Diagnosis (AD) systems in the medical domain. The dataset contains patients synthesized using a proprietary medical knowledge base and a commercial rule-based AD system. Patients in the dataset are characterized by their socio-demographic data, a pathology they are suffering from, a set of symptoms and antecedents related to this pathology, and a differential diagnosis. The symptoms and antecedents can be binary, categorical and multi-choice, with the potential of leading to more efficient and natural interactions between ASD/AD systems and patients. To the best of our knowledge, this is the first large-scale dataset that includes the differential diagnosis, and non-binary symptoms and antecedents.

**Note**: We use evidence as a general term to refer to a symptom or an antecedent.

This directory contains the following files:
- **release_evidences.json**: a JSON file describing all possible evidences considered in the dataset.
- **release_conditions.json**: a JSON file describing all pathologies considered in the dataset.
- **release_train_patients.zip**: a CSV file containing the patients of the training set.
- **release_validate_patients.zip**: a CSV file containing the patients of the validation set.
- **release_test_patients.zip**: a CSV file containing the patients of the test set.

## Evidence Description

Each evidence in the `release_evidences.json` file is described using the following entries:
- **name**: name of the evidence.
- **code_question**: a code that identifies which evidences are related. Evidences having the same `code_question` form a group of related symptoms. The value of the `code_question` refers to the evidence that needs to be simulated/activated for the other members of the group to eventually be simulated.
- **question_fr**: the query, in French, associated with the evidence.
- **question_en**: the query, in English, associated with the evidence.
- **is_antecedent**: a flag indicating whether the evidence is an antecedent or a symptom.
- **data_type**: the type of evidence. We use `B` for binary, `C` for categorical, and `M` for multi-choice evidences.
- **default_value**: the default value of the evidence. If this value is used to characterize the evidence, then it is as if the evidence was not synthesized.
- **possible-values**: the possible values for the evidences. Only valid for categorical and multi-choice evidences.
- **value_meaning**: the meaning, in French and English, of each code that is part of the `possible-values` field. Only valid for categorical and multi-choice evidences.

## Pathology Description

The file `release_conditions.json` contains information about the pathologies that patients in the datasets may suffer from. Each pathology has the following attributes:
- **condition_name**: name of the pathology.
- **cond-name-fr**: name of the pathology in French.
- **cond-name-eng**: name of the pathology in English.
- **icd10-id**: ICD-10 code of the pathology.
- **severity**: the severity associated with the pathology. The lower, the more severe.
- **symptoms**: data structure describing the set of symptoms characterizing the pathology. Each symptom is represented by its corresponding `name` entry in the `release_evidences.json` file.
- **antecedents**: data structure describing the set of antecedents characterizing the pathology. Each antecedent is represented by its corresponding `name` entry in the `release_evidences.json` file.

## Patient Description

Each patient in each of the 3 sets has the following attributes:
- **AGE**: the age of the synthesized patient.
- **SEX**: the sex of the synthesized patient.
- **PATHOLOGY**: name of the ground truth pathology (`condition_name` property in the `release_conditions.json` file) that the synthesized patient is suffering from.
- **EVIDENCES**: list of evidences experienced by the patient. An evidence can either be binary, categorical or multi-choice. A categorical or multi-choice evidence is represented in the format `[evidence-name]_@_[evidence-value]` where `[evidence-name]` is the name of the evidence (`name` entry in the `release_evidences.json` file) and `[evidence-value]` is a value from the `possible-values` entry. Note that for a multi-choice evidence, it is possible to have several `[evidence-name]_@_[evidence-value]` items in the evidence list, with each item being associated with a different evidence value. A binary evidence is represented as `[evidence-name]`.
- **INITIAL_EVIDENCE**: the evidence provided by the patient to kick-start an interaction with an ASD/AD system. This is useful during model evaluation for a fair comparison of ASD/AD systems as they will all begin an interaction with a given patient from the same starting point. The initial evidence is randomly selected from the binary evidences found in the evidence list mentioned above (i.e., `EVIDENCES`) and it is part of this list.
- **DIFFERENTIAL_DIAGNOSIS**: the ground truth differential diagnosis for the patient. It is represented as a list of pairs of the form `[[patho_1, proba_1], [patho_2, proba_2], ...]` where `patho_i` is the pathology name (`condition_name` entry in the `release_conditions.json` file) and `proba_i` is its related probability.
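For illustration, a minimal sketch of splitting one `EVIDENCES` entry into a name/value pair; the evidence names in the demo calls are invented, real names come from `release_evidences.json`:

```python
def parse_evidence(item: str):
    """Split one EVIDENCES entry into (evidence_name, value).

    Categorical and multi-choice evidences use the
    `[evidence-name]_@_[evidence-value]` format; binary evidences are bare names.
    """
    if "_@_" in item:
        name, value = item.split("_@_", 1)
        return name, value
    return item, True  # binary evidence: present

print(parse_evidence("evidence_a_@_value_1"))  # hypothetical categorical evidence
print(parse_evidence("evidence_b"))            # hypothetical binary evidence
```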
## Note

We hope this dataset will encourage future works on ASD and AD systems that consider the differential diagnosis and the severity of pathologies. It is important to keep in mind that this dataset is formed of synthetic patients and is meant for research purposes.

Given the assumptions made during the generation process of this dataset, we would like to emphasize that the dataset should not be used to train and deploy a model prior to performing rigorous evaluations of the model performance and verifying that the system has proper coverage and representation of the population that it will interact with.

It is important to understand that the level of specificity, sensitivity and confidence that a physician will seek when evaluating a patient will be influenced by the clinical setting. The dataset was built for acute care and biased toward high mortality and morbidity pathologies. Physicians will tend to consider negative evidences as equally important in such a clinical context in order to evaluate high acuity diseases.

In the creation of the DDXPlus dataset, a small subset of the diseases was chosen to establish a baseline. Medical professionals have to consider this very important point when reviewing the results of models trained with this dataset, as the differential is considerably smaller. A smaller differential means fewer potential evidences to collect. It is thus essential to understand this point when looking at the differential produced and the evidence collected by a model based on this dataset.

For more information, please check our [paper](https://arxiv.org/abs/2205.09148).
WhiteRabbitNeo/Code-Functions-Level-General
---
license: apache-2.0
---
acon96/Home-Assistant-Requests
---
license: mit
task_categories:
- question-answering
- text-generation
tags:
- automation
- home
- assistant
language:
- en
pretty_name: Home Assistant Requests
size_categories:
- 10K<n<100K
---

# Home Assistant Requests Dataset

This dataset contains a list of requests and responses for a user interacting with a personal assistant that controls an instance of [Home Assistant](https://www.home-assistant.io/).

The dataset is generated from the different CSV "piles". The "piles" contain different chunks of requests that are assembled into a final context that is presented to the LLM. For example, `piles/pile_of_device_names.csv` contains only names of various devices to be used as part of the context, as well as inserted into `piles/pile_of_templated_actions.csv` and `piles/pile_of_status_requests.csv`. The logic for assembling the final dataset from the piles is contained in [generate_home_assistant_data.py](./generate_home_assistant_data.py).

## Generating the dataset from piles

`python3 generate_home_assistant_data.py --train --test --large --sharegpt`

Supported dataset splits are `--test`, `--train`, & `--sample`

Arguments to set the train dataset size are `--small`, `--medium`, `--large`, & `--xl`.

Supported formats are `--raw_corpus` (ChatML formatted) & `--sharegpt`

## Merging with other instruct-datasets for training

`python3 generate_home_assistant_data.py --merge <dataset>`

Supported datasets right now are:
- `alpaca`
- `wizardlm70k`

Please note that the supported datasets all have different licenses. Be aware that the license of the resulting data mixture might be different from the license of this dataset alone.

## Adding a new personality

In order to add a new personality, you need to define a new system prompt and a new set of responses for the assistant. The system prompt is the description of the assistant's behavior that occurs at the start of the context. The responses are what is said back to the user when performing a task. The model should still respond with the correct service call no matter what the assistant's response is. The list of system prompts is stored in `pile_of_system_prompts.csv`, and the list of responses is stored in `pile_of_responses.csv`.

There are 2 columns in `pile_of_system_prompts.csv`:
- `persona`: the name of the persona
- `prompt`: the system prompt to use for that persona. Recommended to put this in quotes in case the prompt also has commas in it

The response pile is a CSV with the following headers: `service,response,language,persona,short`
- `service`: the service name that we are responding to. Make sure you cover enough different services so that the model can learn how to respond in all situations.
- `response`: the text of the response. Recommended to put this in quotes in case the response also has commas in it
- `language`: the language code of the response (currently only `en` is supported)
- `persona`: the name of the persona the response belongs to. Use the name of your persona here
- `short`: either 0 or 1. If it is 1, then the response is considered "short" and can be combined with other "short" responses using "and". These are used for examples where there are multiple service calls.
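For illustration, a hypothetical pair of response rows; the response texts and the `assistant` persona below are invented, not taken from the actual pile:

```csv
service,response,language,persona,short
light.turn_on,"Turning on the light for you now.",en,assistant,0
light.turn_off,"turning off the light",en,assistant,1
```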
Generating the full dataset using the Python script will print out a warning for any responses that are missing for a persona.

## Adding new Home Assistant functionality

TODO

<!-- In order to add new home assistant device types, you will need to add data to a handful of piles, as well as make small modifications to the `generate_home_assistant_data.py` script.
1. Add 15-30 new device names with the new type to the `pile_of_device_names.csv`. This should be an entity_id and a 'friendly name'
2. Add -->
infgrad/retrieval_data_llm
---
license: mit
language:
- zh
size_categories:
- 100K<n<1M
---

Retrieval training data with hard negatives; about 200K examples in total.

File format: jsonl. A single-line example:

```
{"Query": "大熊猫的饮食习性", "Positive Document": "大熊猫主要以竹子为食,但也会吃水果和小型动物。它们拥有强壮的颌部和牙齿,能够咬碎竹子坚硬的外壳。", "Hard Negative Document": "老虎是肉食性动物,主要捕食鹿、野猪等大型动物。它们的牙齿和爪子非常锋利,是捕猎的利器。"}
```
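A minimal sketch for reading the triplets; the local file name is an assumption:

```python
import json

triplets = []
with open("train.jsonl", encoding="utf-8") as f:  # assumed local file name
    for line in f:
        row = json.loads(line)
        triplets.append(
            (row["Query"], row["Positive Document"], row["Hard Negative Document"])
        )

print(triplets[0])  # (query, positive document, hard negative document)
```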
HathawayLiu/housing_dataset
---
language:
- en
size_categories:
- 100K<n<1M
tags:
- housing
- permits
- Seattle
dataset_info:
  features:
  - name: PermitNum
    dtype: string
  - name: PermitClass
    dtype: string
  - name: PermitClassMapped
    dtype: string
  - name: PermitTypeMapped
    dtype: string
  - name: PermitTypeDesc
    dtype: string
  - name: Description
    dtype: string
  - name: HousingUnits
    dtype: int64
  - name: HousingUnitsRemoved
    dtype: int64
  - name: HousingUnitsAdded
    dtype: int64
  - name: EstProjectCost
    dtype: float32
  - name: AppliedDate
    dtype: string
  - name: IssuedDate
    dtype: string
  - name: ExpiresDate
    dtype: string
  - name: CompletedDate
    dtype: string
  - name: StatusCurrent
    dtype: string
  - name: RelatedMup
    dtype: string
  - name: OriginalAddress1
    dtype: string
  - name: OriginalCity
    dtype: string
  - name: OriginalState
    dtype: string
  - name: OriginalZip
    dtype: int64
  - name: ContractorCompanyName
    dtype: string
  - name: Link
    dtype: string
  - name: Latitude
    dtype: float32
  - name: Longitude
    dtype: float32
  - name: Location1
    dtype: string
  - name: NeighborDistrict
    dtype: string
  splits:
  - name: train
    num_bytes: 47214591
    num_examples: 97541
  - name: test
    num_bytes: 11802066
    num_examples: 24388
  download_size: 18076020
  dataset_size: 59016657
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# Dataset Card for Housing_Dataset

This dataset contains all the building permits issued or in progress within the city of Seattle from 2000 to the present, and it is still being updated as time goes on. Information includes permit record URLs, detailed addresses, building costs, etc., as presented in the `housing_dataset.py` file and the description below.

## Dataset Details

### Dataset Description

This [**Seattle Housing permits dataset**](https://data.seattle.gov/Permitting/Building-Permits/76t5-zqzr/about_data) is authorized by the Seattle Government and can be found in the Seattle Government open data portal. The Building Permits dataset from the City of Seattle's Open Data portal provides comprehensive information about building permits issued or currently in progress within Seattle. This dataset, which dates back to 1990 and continues to be updated, includes a wide range of details such as permit numbers, types, descriptions, estimated project costs, and related contractor information, all of which can be found in the .csv table on the official website; the raw table contains 25 columns.

Moreover, Seattle is divided into 13 Neighborhood Districts. Based on the [Seattle Neighborhood District GeoJSON file](https://data-seattlecitygis.opendata.arcgis.com/datasets/SeattleCityGIS::neighborhood-map-atlas-districts/about) found on the Seattle government website, a new column was created, namely NeighborDistrict. With the provided GeoJSON file, every permit is assigned to its corresponding neighborhood district using the `Latitude` and `Longitude` columns in the CSV for future usage.

- **Curated by:** [Seattle Government Open data portal](https://data.seattle.gov/)
- **Language(s) (NLP):** English
- **License:** [Public Domain by Seattle Government](http://www.seattle.gov/sdci)

### Dataset Sources

- **Official Website:** https://data.seattle.gov/
- **Repository for Cleaned Dataset:** https://github.com/HathawayLiu/Housing_dataset

## Uses

The Building Permits dataset from the City of Seattle is intended for use in various urban development and research applications.
It can assist in understanding building trends in Seattle, aid city planning, and support academic research on urban development. The dataset is also a valuable tool for residents and businesses to stay informed about construction activities and regulations in the city. Specifically for residents, this dataset provides starting information for choosing future housing by looking at housing cost, neighborhood district, and other information in the dataset. Additionally, it supports transparency and public engagement in city planning processes.

### Direct Use

The Building Permits dataset from the City of Seattle is suitable for several use cases:

- **Urban Planning and Development:** Planners and developers can analyze trends in building permits to inform city development strategies and infrastructure planning.
- **Academic Research:** Researchers in urban studies, economics, and social sciences can use the data for studies on urban growth, housing, and economic activity.
- **Real Estate Analysis:** Real estate professionals can assess building activities in neighborhoods for market analysis and investment decisions.
- **Public Awareness:** The general public can use this data to stay informed about construction activities and developmental changes in their community.
- **Government and Policy Making:** Local government officials can utilize this data to make informed decisions on housing policies, zoning laws, and community development projects.
- **Residents' Housing Choices:** Residents can consult this dataset for information relevant to their future housing choices.

### Out-of-Scope Use

The Building Permits dataset from the City of Seattle should not be used for purposes that could infringe on privacy or for activities that are not in line with ethical standards. This includes any form of misuse or malicious use such as targeting individuals or businesses based on the information provided in the dataset. Additionally, the dataset may not be suitable for applications requiring highly specialized or non-public information about building structures, as it primarily contains permit-related data.

## Dataset Structure

The cleaned and modified full dataset (`Building_Permits_Cleaned.csv`) and the split train (`housing_train_dataset.csv`) and test (`housing_test_dataset.csv`) datasets are provided in the following GitHub repo: https://github.com/HathawayLiu/Housing_dataset. The cleaned train and test datasets are also provided in the **`data`** folder of this repo. The cleaned dataset contains 26 columns in total:

- **`PermitNum`(string):** The tracking number used to refer to this permit in SDCI's tracking system.
- **`PermitClass`(string):** The permit class tells you the type of project.
- **`PermitClassMapped`(string):** A description of whether the permit is for a residential or non-residential project.
- **`PermitTypeMapped`(string):** The permit type by category, such as building, demolition, roofing, grading, and environmentally critical areas.
- **`PermitTypeDesc`(string):** Additional information about the type of permit. For example, whether it is an addition/alteration or a new project.
- **`Description`(string):** A brief description of the work that will be done under this permit. This description is subject to change before SDCI issues the permit. The description is generally more stable if we have issued the permit. Very long descriptions have been truncated.
- **`HousingUnits`(int):** The number of housing units included at the beginning of the project.
- **`HousingUnitsRemoved`(int):** The number of housing units removed during the project.
- **`HousingUnitsAdded`(int):** The number of housing units added during the project.
- **`EstProjectCost`(float):** The estimated project cost of the work being proposed, based on fair market value (parts plus labor). The estimated cost (if any) represents the best available information to date, and is subject to change if the project is modified. We do not collect the estimated project cost for all permit types.
- **`AppliedDate`(string):** The date SDCI accepted the application as a complete submittal.
- **`IssuedDate`(string):** The date SDCI issued the permit. If there is an Application Date but no Issue Date, this generally means the application is still under review.
- **`ExpiresDate`(string):** The date the application is due to expire. Generally, this is the date by which work is supposed to be completed (barring renewals or further extensions). If there is not an Expiration Date, this generally means the permit has not been issued.
- **`CompletedDate`(string):** The date the permit had all its inspections completed. If there is an Issue Date but not a Completed Date, this generally means the permit is still under inspection.
- **`StatusCurrent`(string):** The current status of the permit (e.g., `Completed`).
- **`RelatedMup`(string):** The land use permit that is related to this building permit, if there is one.
- **`OriginalAddress1`(string):** The street name and number of the project.
- **`OriginalCity`(string):** The city for the project's address.
- **`OriginalState`(string):** The state for the project's address.
- **`OriginalZip`(int):** The Zip code for the project's address.
- **`ContractorCompanyName`(string):** The contractor(s) associated with this permit.
- **`Link`(string):** A link to view full details and current status information about this permit at SDCI's website.
- **`Latitude`(float):** Latitude of the worksite where permit activity occurs. May be missing for a small number of permits considered "unaddressable."
- **`Longitude`(float):** Longitude of the worksite where permit activity occurs. May be missing for a small number of permits considered "unaddressable."
- **`Location1`(string):** The latitude and longitude location for mapping purposes.
- (Newly added column) **`NeighborDistrict`(string):** The district that the permit belongs to according to its location.

## Dataset Creation

### Curation Rationale

The Building Permits dataset from the City of Seattle was created to foster transparency, public awareness, and engagement in the city's urban development processes. It provides residents, professionals, and researchers with detailed information about building activities, facilitating informed decision-making and community involvement in city planning and development. Given the importance of the 13 neighborhood districts in Seattle, the newly added neighborhood-district column gives residents and the government a chance to investigate building activities and quality of life per district. The dataset supports the city's commitment to open data and the promotion of data-driven insights for improving urban infrastructure and living conditions.

#### Data Collection and Processing

The Building Permits dataset is collected by the Seattle Government and contains all of the recent information about housing permits in Seattle. The dataset is published on the Seattle Government Open Data Portal and keeps updating over time.
You can download the raw data from the [Seattle Government website](https://data.seattle.gov/Permitting/Building-Permits/76t5-zqzr/about_data) in different formats. For my own purposes I downloaded the CSV version, current as of the modified time of this repo; you can find it in the following GitHub repo: https://github.com/HathawayLiu/Housing_dataset (file name: `Building_Permits_20240213.csv`).

To process and clean the dataset, I did the following steps:

1. Pre-process the data to make sure the columns are of the correct types.
2. Use the provided `Latitude` and `Longitude` columns in the dataset, along with the Google Geocoding API, to fill in the blanks in the `OriginalZip` (zip code) column.
3. Use the provided `Latitude` and `Longitude` columns and the GeoJSON file of Seattle neighborhood districts to assign building permits to their corresponding neighborhood districts. (The GeoJSON file of Seattle neighborhood districts can be found in this GitHub repo: https://github.com/HathawayLiu/Housing_dataset. You can also download it through the Seattle GeoData portal: https://data-seattlecitygis.opendata.arcgis.com/datasets/SeattleCityGIS::neighborhood-map-atlas-districts/about)
4. Fill in the blanks left in the dataset with `N/A` for easier future use.
5. Split the dataset into train and test sets for future use.

For more details about data cleaning and processing, refer to the `data_cleaning.py` file in this repo. Notice that to be able to use the function that fetches zip codes, you need to use your own API key. Applying for a Google Geocoding API key is free; you can simply follow this link to apply for one: https://developers.google.com/maps/documentation/geocoding/get-api-key

You are more than welcome to download the raw data and process the dataset yourself. To load the dataset, you can use the following command:

```python
!pip install datasets
from datasets import load_dataset

dataset = load_dataset("HathawayLiu/housing_dataset", trust_remote_code=True)
```

To generate an example from the train/test set, use:

```python
next(iter(dataset['train']))
## next(iter(dataset['test']))
```

You can see an example from the dataset like the following:

```
{'PermitNum': '6075593-CN',
 'PermitClass': 'Single Family/Duplex',
 'PermitClassMapped': 'Residential',
 'PermitTypeMapped': 'Building',
 'PermitTypeDesc': 'Addition/Alteration',
 'Description': 'Replace existing windows; Upgrade new windows and framing for existing single family residence subject to field inspection',
 'HousingUnits': 0,
 'HousingUnitsRemoved': 0,
 'HousingUnitsAdded': 0,
 'EstProjectCost': 43014.0,
 'AppliedDate': '10/12/05',
 'IssuedDate': '10/12/05',
 'ExpiresDate': '4/12/07',
 'CompletedDate': '2/1/06',
 'StatusCurrent': 'Completed',
 'RelatedMup': 'nan',
 'OriginalAddress1': '624 NW 88TH ST',
 'OriginalCity': 'SEATTLE',
 'OriginalState': 'WA',
 'OriginalZip': 98117,
 'ContractorCompanyName': 'STATEWIDE INC',
 'Link': 'https://cosaccela.seattle.gov/portal/customize/LinkToRecord.aspx?altId=6075593-CN',
 'Latitude': 47.692996978759766,
 'Longitude': -122.36441040039062,
 'Location1': '47.69299754, -122.3644121',
 'NeighborDistrict': 'Northwest'}
```
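For reference, a minimal sketch of step 3 above, assuming the districts GeoJSON has been downloaded locally; the file name and the district-name property key below are assumptions, so check the actual file:

```python
import json

from shapely.geometry import Point, shape

with open("neighborhood_districts.geojson", encoding="utf-8") as f:  # assumed name
    features = json.load(f)["features"]

# Pre-build (district_name, polygon) pairs; "L_HOOD" is an assumed property key.
districts = [(feat["properties"].get("L_HOOD"), shape(feat["geometry"]))
             for feat in features]

def assign_district(latitude: float, longitude: float) -> str:
    point = Point(longitude, latitude)  # GeoJSON coordinates are (lon, lat)
    for name, polygon in districts:
        if polygon.contains(point):
            return name
    return "N/A"  # matches the fill value used in step 4

print(assign_district(47.6930, -122.3644))  # the example permit above -> Northwest
```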
#### Who are the source data producers?

The Building Permits dataset is originally created and maintained by the City of Seattle, specifically by its Department of Construction and Inspections. This department is responsible for overseeing building and land use in Seattle, ensuring safety and compliance with city codes. The dataset reflects the department's ongoing work in managing and documenting building permits issued in the city. For detailed information, visit the [Seattle Department of Construction & Inspections](https://www.seattle.gov/sdci).

## Bias, Risks, and Limitations

The Building Permits dataset from the City of Seattle has both technical and sociotechnical limitations:

1. **Technical Limitations**:
   - **Data Completeness**: Not all building permits may be captured, especially older records. Specific columns like `IssuedDate`, `CompletedDate`, `AppliedDate`, and `RelatedMup` contain many missing values.
   - **Data Accuracy**: There may be errors or inconsistencies in the data, especially in historical records.
   - **Timeliness**: The dataset might not be updated in real time, causing delays in reflecting the most current information.

2. **Sociotechnical Limitations**:
   - **Privacy Concerns**: Detailed permit data could potentially be used to infer private information about property owners or residents.
   - **Bias in Planning Decisions**: The data might be used to reinforce existing biases in urban planning, affecting marginalized communities.
   - **Dependence on Technical Proficiency**: The dataset's utility is limited by the user's ability to interpret and analyze the data effectively.

3. **Bias**: The dataset reflects only permitted construction, not all building activities. This can bias analyses towards formal, recorded developments, overlooking informal or unpermitted construction.

4. **Risk**: Misuse can occur if the data is used to unfairly target specific neighborhoods or communities for enforcement or political reasons.

These limitations should be considered when using this dataset for research, policy-making, or urban planning.

### Recommendations

To address the bias and limitations above, users should take the following recommendations into account:

- **Cross-Verification**: Use supplementary data sources for a more comprehensive view.
- **Privacy and Ethical Use**: Handle data responsibly, respecting privacy and avoiding discriminatory practices.
- **Data Cleaning and Validation**: Regularly update and clean the dataset to maintain accuracy and reliability.
Henrychur/MMedBench
---
license: cc-by-4.0
language:
- en
- zh
- ja
- fr
- ru
- es
tags:
- medical
task_categories:
- question-answering
---

# MMedBench

[💻Github Repo](https://github.com/MAGIC-AI4Med/MMedLM) [🖨️arXiv Paper](https://arxiv.org/abs/2402.13963)

The official benchmark for "Towards Building Multilingual Language Model for Medicine".

## Introduction

This repo contains MMedBench, a comprehensive multilingual medical benchmark comprising 45,048 QA pairs for training and 8,518 QA pairs for testing. Each sample includes a question, options, the correct answer, and a reference explanation for the selection of the correct answer.

To access the data, please download MMedBench.zip. Upon extracting the file, you will find two folders named Train and Test. Each folder contains six .jsonl files, each named after its respective language. Each line in these files represents a sample, with the following attributes:

| Key                | Value Type | Description                                                                      |
|--------------------|------------|----------------------------------------------------------------------------------|
| question           | String     | A string containing the question                                                  |
| options            | Dict       | A dict where each key is an index 'A,B,C,D,E' and each value is an option string  |
| answer_idx         | String     | A string of the correct answer indices, separated by ','                          |
| rationale          | String     | A string explaining the selection of the correct answer                           |
| human_checked      | Bool       | Whether the rationale has been manually checked                                   |
| human_check_passed | Bool       | Whether the rationale has passed the manual check                                 |

Our [GitHub](https://github.com/MAGIC-AI4Med/MMedLM) provides the code for fine-tuning on the trainset of MMedBench. Check it out for more details.
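A minimal sketch of reading one per-language split after extracting MMedBench.zip; the exact file name below is an assumption based on the layout described above:

```python
import json

samples = []
with open("Train/English.jsonl", encoding="utf-8") as f:  # assumed file name
    for line in f:
        samples.append(json.loads(line))

s = samples[0]
print(s["question"])
print(s["options"], "->", s["answer_idx"])
print(s["rationale"])
```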
## News

[2024.2.21] Our pre-print paper is released on arXiv. Dive into our findings [here](https://arxiv.org/abs/2402.13963).

[2024.2.20] We release [MMedLM](https://huggingface.co/Henrychur/MMedLM) and [MMedLM 2](https://huggingface.co/Henrychur/MMedLM2). With auto-regressive continued training on MMedC, these models achieve superior performance compared to all other open-source models, even rivaling GPT-4 on MMedBench.

[2024.2.20] We release [MMedC](https://huggingface.co/datasets/Henrychur/MMedC), a multilingual medical corpus containing 25.5B tokens.

[2024.2.20] We release [MMedBench](https://huggingface.co/datasets/Henrychur/MMedBench), a new multilingual medical multi-choice question-answering benchmark with rationales. Check out the leaderboard [here](https://henrychur.github.io/MultilingualMedQA/).

## Evaluation on MMedBench

The further pretrained MMedLM 2 showcases strong performance in the medical domain across different languages.

| Method           | Size | Year    | MMedC     | MMedBench | English | Chinese | Japanese | French | Russian | Spanish | Avg.  |
|------------------|------|---------|-----------|-----------|---------|---------|----------|--------|---------|---------|-------|
| GPT-3.5          | -    | 2022.12 | &#10007;  | &#10007;  | 56.88   | 52.29   | 34.63    | 32.48  | 66.36   | 66.06   | 51.47 |
| GPT-4            | -    | 2023.3  | &#10007;  | &#10007;  | 78.00   | 75.07   | 72.91    | 56.59  | 83.62   | 85.67   | 74.27 |
| Gemini-1.0 pro   | -    | 2024.1  | &#10007;  | &#10007;  | 53.73   | 60.19   | 44.22    | 29.90  | 73.44   | 69.69   | 55.20 |
| BLOOMZ           | 7B   | 2023.5  | &#10007;  | trainset  | 43.28   | 58.06   | 32.66    | 26.37  | 62.89   | 47.34   | 45.10 |
| InternLM         | 7B   | 2023.7  | &#10007;  | trainset  | 44.07   | 64.62   | 37.19    | 24.92  | 58.20   | 44.97   | 45.67 |
| Llama 2          | 7B   | 2023.7  | &#10007;  | trainset  | 43.36   | 50.29   | 25.13    | 20.90  | 66.80   | 47.10   | 42.26 |
| MedAlpaca        | 7B   | 2023.3  | &#10007;  | trainset  | 46.74   | 44.80   | 29.64    | 21.06  | 59.38   | 45.00   | 41.11 |
| ChatDoctor       | 7B   | 2023.4  | &#10007;  | trainset  | 43.52   | 43.26   | 25.63    | 18.81  | 62.50   | 43.44   | 39.53 |
| PMC-LLaMA        | 7B   | 2023.4  | &#10007;  | trainset  | 47.53   | 42.44   | 24.12    | 20.74  | 62.11   | 43.29   | 40.04 |
| Mistral          | 7B   | 2023.10 | &#10007;  | trainset  | 61.74   | 71.10   | 44.72    | 48.71  | 74.22   | 63.86   | 60.73 |
| InternLM 2       | 7B   | 2024.2  | &#10007;  | trainset  | 57.27   | 77.55   | 47.74    | 41.00  | 68.36   | 59.59   | 58.59 |
| MMedLM (Ours)    | 7B   | -       | &#10003;  | trainset  | 49.88   | 70.49   | 46.23    | 36.66  | 72.27   | 54.52   | 55.01 |
| MMedLM 2 (Ours)  | 7B   | -       | &#10003;  | trainset  | 61.74   | 80.01   | 61.81    | 52.09  | 80.47   | 67.65   | 67.30 |

- GPT and Gemini are evaluated under the zero-shot setting through their APIs.
- Open-source models first undergo training on the trainset of MMedBench before evaluation.

## Contact

If you have any questions, please feel free to contact qiupengcheng@pjlab.org.cn.

## Citation

```
@misc{qiu2024building,
      title={Towards Building Multilingual Language Model for Medicine},
      author={Pengcheng Qiu and Chaoyi Wu and Xiaoman Zhang and Weixiong Lin and Haicheng Wang and Ya Zhang and Yanfeng Wang and Weidi Xie},
      year={2024},
      eprint={2402.13963},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
oscar-corpus/OSCAR-2109
--- pretty_name: OSCAR annotations_creators: - no-annotation language_creators: - found language: - af - als - gsw - am - an - ar - arz - as - ast - av - az - azb - ba - bar - be - bg - bh - bn - bo - bpy - br - bs - bxr - ca - cbk - ce - ceb - ckb - cs - cv - cy - da - de - diq - dsb - dv - el - eml - en - eo - es - et - eu - fa - fi - fr - frr - fy - ga - gd - gl - gn - gom - gu - gv - he - hi - hr - hsb - ht - hu - hy - ia - id - ie - ilo - io - is - it - ja - jbo - jv - ka - kk - km - kn - ko - krc - ku - kv - kw - ky - la - lb - lez - li - lmo - lo - lrc - lt - lv - mai - mg - mhr - min - mk - ml - mn - mr - mrj - ms - mt - mwl - my - myv - mzn - nah - nap - nds - ne - new - nl - nn - 'no' - oc - or - os - pa - pam - pl - pms - pnb - ps - pt - qu - rm - ro - ru - rue - sa - sah - scn - sco - sd - sh - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - tg - th - tk - tl - tr - tt - tyv - ug - uk - ur - uz - vec - vi - vls - vo - wa - war - wuu - xal - xmf - yi - yo - zh license: - cc0-1.0 multilinguality: - multilingual size_categories: unshuffled_deduplicated_af: - 100K<n<1M unshuffled_deduplicated_als: - 1K<n<10K unshuffled_deduplicated_am: - 10K<n<100K unshuffled_deduplicated_an: - 1K<n<10K unshuffled_deduplicated_ar: - 1M<n<10M unshuffled_deduplicated_arz: - 10K<n<100K unshuffled_deduplicated_as: - 1K<n<10K unshuffled_deduplicated_ast: - 1K<n<10K unshuffled_deduplicated_av: - n<1K unshuffled_deduplicated_az: - 100K<n<1M unshuffled_deduplicated_azb: - 1K<n<10K unshuffled_deduplicated_ba: - 10K<n<100K unshuffled_deduplicated_bar: - n<1K unshuffled_deduplicated_bcl: - n<1K unshuffled_deduplicated_be: - 100K<n<1M unshuffled_deduplicated_bg: - 1M<n<10M unshuffled_deduplicated_bh: - n<1K unshuffled_deduplicated_bn: - 1M<n<10M unshuffled_deduplicated_bo: - 10K<n<100K unshuffled_deduplicated_bpy: - 1K<n<10K unshuffled_deduplicated_br: - 10K<n<100K unshuffled_deduplicated_bs: - n<1K unshuffled_deduplicated_bxr: - n<1K unshuffled_deduplicated_ca: - 1M<n<10M unshuffled_deduplicated_cbk: - n<1K unshuffled_deduplicated_ce: - 1K<n<10K unshuffled_deduplicated_ceb: - 10K<n<100K unshuffled_deduplicated_ckb: - 10K<n<100K unshuffled_deduplicated_cs: - 10M<n<100M unshuffled_deduplicated_cv: - 10K<n<100K unshuffled_deduplicated_cy: - 10K<n<100K unshuffled_deduplicated_da: - 1M<n<10M unshuffled_deduplicated_de: - 10M<n<100M unshuffled_deduplicated_diq: - n<1K unshuffled_deduplicated_dsb: - n<1K unshuffled_deduplicated_dv: - 10K<n<100K unshuffled_deduplicated_el: - 1M<n<10M unshuffled_deduplicated_eml: - n<1K unshuffled_deduplicated_en: - 100M<n<1B unshuffled_deduplicated_eo: - 10K<n<100K unshuffled_deduplicated_es: - 10M<n<100M unshuffled_deduplicated_et: - 1M<n<10M unshuffled_deduplicated_eu: - 100K<n<1M unshuffled_deduplicated_fa: - 1M<n<10M unshuffled_deduplicated_fi: - 1M<n<10M unshuffled_deduplicated_fr: - 10M<n<100M unshuffled_deduplicated_frr: - n<1K unshuffled_deduplicated_fy: - 10K<n<100K unshuffled_deduplicated_ga: - 10K<n<100K unshuffled_deduplicated_gd: - 1K<n<10K unshuffled_deduplicated_gl: - 100K<n<1M unshuffled_deduplicated_gn: - n<1K unshuffled_deduplicated_gom: - n<1K unshuffled_deduplicated_gu: - 100K<n<1M unshuffled_deduplicated_he: - 1M<n<10M unshuffled_deduplicated_hi: - 1M<n<10M unshuffled_deduplicated_hr: - 100K<n<1M unshuffled_deduplicated_hsb: - 1K<n<10K unshuffled_deduplicated_ht: - n<1K unshuffled_deduplicated_hu: - 1M<n<10M unshuffled_deduplicated_hy: - 100K<n<1M unshuffled_deduplicated_ia: - n<1K unshuffled_deduplicated_id: - 1M<n<10M unshuffled_deduplicated_ie: - n<1K 
unshuffled_deduplicated_ilo: - 1K<n<10K unshuffled_deduplicated_io: - n<1K unshuffled_deduplicated_is: - 100K<n<1M unshuffled_deduplicated_it: - 10M<n<100M unshuffled_deduplicated_ja: - 10M<n<100M unshuffled_deduplicated_jbo: - n<1K unshuffled_deduplicated_jv: - 1K<n<10K unshuffled_deduplicated_ka: - 100K<n<1M unshuffled_deduplicated_kk: - 100K<n<1M unshuffled_deduplicated_km: - 100K<n<1M unshuffled_deduplicated_kn: - 100K<n<1M unshuffled_deduplicated_ko: - 1M<n<10M unshuffled_deduplicated_krc: - 1K<n<10K unshuffled_deduplicated_ku: - 10K<n<100K unshuffled_deduplicated_kv: - n<1K unshuffled_deduplicated_kw: - n<1K unshuffled_deduplicated_ky: - 10K<n<100K unshuffled_deduplicated_la: - 10K<n<100K unshuffled_deduplicated_lb: - 10K<n<100K unshuffled_deduplicated_lez: - 1K<n<10K unshuffled_deduplicated_li: - n<1K unshuffled_deduplicated_lmo: - 1K<n<10K unshuffled_deduplicated_lo: - 10K<n<100K unshuffled_deduplicated_lrc: - n<1K unshuffled_deduplicated_lt: - 1M<n<10M unshuffled_deduplicated_lv: - 100K<n<1M unshuffled_deduplicated_mai: - n<1K unshuffled_deduplicated_mg: - 10K<n<100K unshuffled_deduplicated_mhr: - 1K<n<10K unshuffled_deduplicated_min: - n<1K unshuffled_deduplicated_mk: - 100K<n<1M unshuffled_deduplicated_ml: - 100K<n<1M unshuffled_deduplicated_mn: - 100K<n<1M unshuffled_deduplicated_mr: - 100K<n<1M unshuffled_deduplicated_mrj: - n<1K unshuffled_deduplicated_ms: - 100K<n<1M unshuffled_deduplicated_mt: - 10K<n<100K unshuffled_deduplicated_mwl: - n<1K unshuffled_deduplicated_my: - 100K<n<1M unshuffled_deduplicated_myv: - n<1K unshuffled_deduplicated_mzn: - n<1K unshuffled_deduplicated_nah: - n<1K unshuffled_deduplicated_nap: - n<1K unshuffled_deduplicated_nds: - 1K<n<10K unshuffled_deduplicated_ne: - 100K<n<1M unshuffled_deduplicated_new: - 1K<n<10K unshuffled_deduplicated_nl: - 10M<n<100M unshuffled_deduplicated_nn: - 100K<n<1M unshuffled_deduplicated_no: - 1M<n<10M unshuffled_deduplicated_oc: - 1K<n<10K unshuffled_deduplicated_or: - 10K<n<100K unshuffled_deduplicated_os: - 1K<n<10K unshuffled_deduplicated_pa: - 10K<n<100K unshuffled_deduplicated_pam: - n<1K unshuffled_deduplicated_pl: - 10M<n<100M unshuffled_deduplicated_pms: - 1K<n<10K unshuffled_deduplicated_pnb: - 1K<n<10K unshuffled_deduplicated_ps: - 10K<n<100K unshuffled_deduplicated_pt: - 10M<n<100M unshuffled_deduplicated_qu: - n<1K unshuffled_deduplicated_rm: - n<1K unshuffled_deduplicated_ro: - 1M<n<10M unshuffled_deduplicated_ru: - 100M<n<1B unshuffled_deduplicated_sa: - 1K<n<10K unshuffled_deduplicated_sah: - 1K<n<10K unshuffled_deduplicated_scn: - n<1K unshuffled_deduplicated_sd: - 10K<n<100K unshuffled_deduplicated_sh: - 10K<n<100K unshuffled_deduplicated_si: - 100K<n<1M unshuffled_deduplicated_sk: - 1M<n<10M unshuffled_deduplicated_sl: - 100K<n<1M unshuffled_deduplicated_so: - n<1K unshuffled_deduplicated_sq: - 100K<n<1M unshuffled_deduplicated_sr: - 100K<n<1M unshuffled_deduplicated_su: - n<1K unshuffled_deduplicated_sv: - 10M<n<100M unshuffled_deduplicated_sw: - 10K<n<100K unshuffled_deduplicated_ta: - 100K<n<1M unshuffled_deduplicated_te: - 100K<n<1M unshuffled_deduplicated_tg: - 10K<n<100K unshuffled_deduplicated_th: - 1M<n<10M unshuffled_deduplicated_tk: - 1K<n<10K unshuffled_deduplicated_tl: - 100K<n<1M unshuffled_deduplicated_tr: - 10M<n<100M unshuffled_deduplicated_tt: - 10K<n<100K unshuffled_deduplicated_tyv: - n<1K unshuffled_deduplicated_ug: - 10K<n<100K unshuffled_deduplicated_uk: - 1M<n<10M unshuffled_deduplicated_ur: - 100K<n<1M unshuffled_deduplicated_uz: - 10K<n<100K unshuffled_deduplicated_vec: - 
n<1K unshuffled_deduplicated_vi: - 1M<n<10M unshuffled_deduplicated_vo: - 1K<n<10K unshuffled_deduplicated_wa: - n<1K unshuffled_deduplicated_war: - 1K<n<10K unshuffled_deduplicated_wuu: - n<1K unshuffled_deduplicated_xal: - n<1K unshuffled_deduplicated_xmf: - 1K<n<10K unshuffled_deduplicated_yi: - 10K<n<100K unshuffled_deduplicated_yo: - n<1K unshuffled_deduplicated_yue: - n<1K unshuffled_deduplicated_zh: - 10M<n<100M unshuffled_original_af: - 100K<n<1M unshuffled_original_als: - 1K<n<10K unshuffled_original_am: - 10K<n<100K unshuffled_original_an: - 1K<n<10K unshuffled_original_ar: - 10M<n<100M unshuffled_original_arz: - 100K<n<1M unshuffled_original_as: - 10K<n<100K unshuffled_original_ast: - 1K<n<10K unshuffled_original_av: - n<1K unshuffled_original_az: - 100K<n<1M unshuffled_original_azb: - 10K<n<100K unshuffled_original_ba: - 10K<n<100K unshuffled_original_bar: - n<1K unshuffled_original_bcl: - n<1K unshuffled_original_be: - 100K<n<1M unshuffled_original_bg: - 1M<n<10M unshuffled_original_bh: - n<1K unshuffled_original_bn: - 1M<n<10M unshuffled_original_bo: - 10K<n<100K unshuffled_original_bpy: - 1K<n<10K unshuffled_original_br: - 10K<n<100K unshuffled_original_bs: - 1K<n<10K unshuffled_original_bxr: - n<1K unshuffled_original_ca: - 1M<n<10M unshuffled_original_cbk: - n<1K unshuffled_original_ce: - 1K<n<10K unshuffled_original_ceb: - 10K<n<100K unshuffled_original_ckb: - 100K<n<1M unshuffled_original_cs: - 10M<n<100M unshuffled_original_cv: - 10K<n<100K unshuffled_original_cy: - 100K<n<1M unshuffled_original_da: - 1M<n<10M unshuffled_original_de: - 100M<n<1B unshuffled_original_diq: - n<1K unshuffled_original_dsb: - n<1K unshuffled_original_dv: - 10K<n<100K unshuffled_original_el: - 10M<n<100M unshuffled_original_eml: - n<1K unshuffled_original_en: - 100M<n<1B unshuffled_original_eo: - 100K<n<1M unshuffled_original_es: - 10M<n<100M unshuffled_original_et: - 1M<n<10M unshuffled_original_eu: - 100K<n<1M unshuffled_original_fa: - 10M<n<100M unshuffled_original_fi: - 1M<n<10M unshuffled_original_fr: - 10M<n<100M unshuffled_original_frr: - n<1K unshuffled_original_fy: - 10K<n<100K unshuffled_original_ga: - 10K<n<100K unshuffled_original_gd: - 1K<n<10K unshuffled_original_gl: - 100K<n<1M unshuffled_original_gn: - n<1K unshuffled_original_gom: - n<1K unshuffled_original_gu: - 100K<n<1M unshuffled_original_he: - 1M<n<10M unshuffled_original_hi: - 1M<n<10M unshuffled_original_hr: - 100K<n<1M unshuffled_original_hsb: - 1K<n<10K unshuffled_original_ht: - n<1K unshuffled_original_hu: - 10M<n<100M unshuffled_original_hy: - 100K<n<1M unshuffled_original_ia: - 1K<n<10K unshuffled_original_id: - 10M<n<100M unshuffled_original_ie: - n<1K unshuffled_original_ilo: - 1K<n<10K unshuffled_original_io: - n<1K unshuffled_original_is: - 100K<n<1M unshuffled_original_it: - 10M<n<100M unshuffled_original_ja: - 10M<n<100M unshuffled_original_jbo: - n<1K unshuffled_original_jv: - 1K<n<10K unshuffled_original_ka: - 100K<n<1M unshuffled_original_kk: - 100K<n<1M unshuffled_original_km: - 100K<n<1M unshuffled_original_kn: - 100K<n<1M unshuffled_original_ko: - 1M<n<10M unshuffled_original_krc: - 1K<n<10K unshuffled_original_ku: - 10K<n<100K unshuffled_original_kv: - 1K<n<10K unshuffled_original_kw: - n<1K unshuffled_original_ky: - 100K<n<1M unshuffled_original_la: - 10K<n<100K unshuffled_original_lb: - 10K<n<100K unshuffled_original_lez: - 1K<n<10K unshuffled_original_li: - n<1K unshuffled_original_lmo: - 1K<n<10K unshuffled_original_lo: - 10K<n<100K unshuffled_original_lrc: - n<1K unshuffled_original_lt: - 1M<n<10M 
unshuffled_original_lv: - 1M<n<10M unshuffled_original_mai: - n<1K unshuffled_original_mg: - 10K<n<100K unshuffled_original_mhr: - 1K<n<10K unshuffled_original_min: - n<1K unshuffled_original_mk: - 100K<n<1M unshuffled_original_ml: - 100K<n<1M unshuffled_original_mn: - 100K<n<1M unshuffled_original_mr: - 100K<n<1M unshuffled_original_mrj: - n<1K unshuffled_original_ms: - 100K<n<1M unshuffled_original_mt: - 10K<n<100K unshuffled_original_mwl: - n<1K unshuffled_original_my: - 100K<n<1M unshuffled_original_myv: - n<1K unshuffled_original_mzn: - 1K<n<10K unshuffled_original_nah: - n<1K unshuffled_original_nap: - n<1K unshuffled_original_nds: - 10K<n<100K unshuffled_original_ne: - 100K<n<1M unshuffled_original_new: - 1K<n<10K unshuffled_original_nl: - 10M<n<100M unshuffled_original_nn: - 100K<n<1M unshuffled_original_no: - 1M<n<10M unshuffled_original_oc: - 10K<n<100K unshuffled_original_or: - 10K<n<100K unshuffled_original_os: - 1K<n<10K unshuffled_original_pa: - 100K<n<1M unshuffled_original_pam: - n<1K unshuffled_original_pl: - 10M<n<100M unshuffled_original_pms: - 1K<n<10K unshuffled_original_pnb: - 1K<n<10K unshuffled_original_ps: - 10K<n<100K unshuffled_original_pt: - 10M<n<100M unshuffled_original_qu: - n<1K unshuffled_original_rm: - n<1K unshuffled_original_ro: - 1M<n<10M unshuffled_original_ru: - 100M<n<1B unshuffled_original_sa: - 10K<n<100K unshuffled_original_sah: - 10K<n<100K unshuffled_original_scn: - n<1K unshuffled_original_sd: - 10K<n<100K unshuffled_original_sh: - 10K<n<100K unshuffled_original_si: - 100K<n<1M unshuffled_original_sk: - 1M<n<10M unshuffled_original_sl: - 1M<n<10M unshuffled_original_so: - n<1K unshuffled_original_sq: - 100K<n<1M unshuffled_original_sr: - 1M<n<10M unshuffled_original_su: - n<1K unshuffled_original_sv: - 10M<n<100M unshuffled_original_sw: - 10K<n<100K unshuffled_original_ta: - 1M<n<10M unshuffled_original_te: - 100K<n<1M unshuffled_original_tg: - 10K<n<100K unshuffled_original_th: - 1M<n<10M unshuffled_original_tk: - 1K<n<10K unshuffled_original_tl: - 100K<n<1M unshuffled_original_tr: - 10M<n<100M unshuffled_original_tt: - 100K<n<1M unshuffled_original_tyv: - n<1K unshuffled_original_ug: - 10K<n<100K unshuffled_original_uk: - 10M<n<100M unshuffled_original_ur: - 100K<n<1M unshuffled_original_uz: - 10K<n<100K unshuffled_original_vec: - n<1K unshuffled_original_vi: - 10M<n<100M unshuffled_original_vo: - 1K<n<10K unshuffled_original_wa: - 1K<n<10K unshuffled_original_war: - 1K<n<10K unshuffled_original_wuu: - n<1K unshuffled_original_xal: - n<1K unshuffled_original_xmf: - 1K<n<10K unshuffled_original_yi: - 10K<n<100K unshuffled_original_yo: - n<1K unshuffled_original_yue: - n<1K unshuffled_original_zh: - 10M<n<100M source_datasets: - original task_categories: - sequence-modeling task_ids: - language-modeling paperswithcode_id: oscar --- # Dataset Card for "oscar" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - 
[Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [github.com/oscar-corpus/corpus](https://github.com/oscar-corpus/corpus)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Dataset Summary

OSCAR or **O**pen **S**uper-large **C**rawled **A**ggregated co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [ungoliant](https://github.com/oscar-corpus/ungoliant) architecture. Data is distributed by language in both original and deduplicated form.

### Supported Tasks and Leaderboards

OSCAR is mainly intended for pretraining language models and learning word representations.
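A single subcorpus can be loaded with the `datasets` library. The following is a minimal sketch, not official usage documentation: it assumes you are authenticated against the Hugging Face Hub and that configuration names follow the per-language headings used on this card (e.g. `deduplicated_af` for the deduplicated Afrikaans subcorpus).

```python
# Minimal sketch: stream a few records from one OSCAR 21.09 subcorpus.
from itertools import islice

from datasets import load_dataset

dataset = load_dataset(
    "oscar-corpus/OSCAR-2109",
    "deduplicated_af",    # assumed configuration name, mirroring the headings below
    split="train",
    streaming=True,       # avoid downloading the whole subcorpus up front
    use_auth_token=True,  # newer `datasets` versions accept token=True instead
)

for record in islice(dataset, 3):
    print(record["text"][:80])
```

Streaming keeps memory use flat, which matters for the larger subcorpora; dropping `streaming=True` downloads and caches the full configuration instead.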
### Languages

All the data is distributed by language; both the original and the deduplicated versions of the data are available. 168 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space-separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.

### Issues

OSCAR 21.09 has known issues regarding the specific languages listed below. Note that other issues may be present in languages that are not listed here.

**If you encounter something that is unexpected, please file an issue here: https://github.com/oscar-corpus/corpus/issues.**

|Language code|Language|Issues|
|-------------|--------|------|
|`tg`|Tajik|[![Tajik issues](https://img.shields.io/github/issues/oscar-corpus/corpus/lang:tg?label=tg&style=for-the-badge)](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Atg+label%3Aver%3A21.09)|
|`tr`|Turkish|[![Turkish issues](https://img.shields.io/github/issues/oscar-corpus/corpus/lang:tr?label=tr&style=for-the-badge)](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Atr+label%3Aver%3A21.09)|
|`vls`|West Flemish|[![West Flemish issues](https://img.shields.io/github/issues/oscar-corpus/corpus/lang:vls?label=vls&style=for-the-badge)](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Avls+label%3Aver%3A21.09)|
|`wuu`|Wu Chinese|[![Wu Chinese issues](https://img.shields.io/github/issues/oscar-corpus/corpus/lang:wuu?label=wuu&style=for-the-badge)](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Awuu+label%3Aver%3A21.09)|
|`nap`|Neapolitan|[![Neapolitan issues](https://img.shields.io/github/issues/oscar-corpus/corpus/lang:nap?label=nap&style=for-the-badge)](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Anap+label%3Aver%3A21.09)|
|`so`|Somali|[![Somali issues](https://img.shields.io/github/issues/oscar-corpus/corpus/lang:so?label=so&style=for-the-badge)](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Aso+label%3Aver%3A21.09)|
|`frr`|Northern Frisian|[![Northern Frisian issues](https://img.shields.io/github/issues/oscar-corpus/corpus/lang:frr?label=frr&style=for-the-badge)](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Afrr+label%3Aver%3A21.09)|
|`cbk`|Chavacano|[![Chavacano issues](https://img.shields.io/github/issues/oscar-corpus/corpus/lang:cbk?label=cbk&style=for-the-badge)](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Acbk+label%3Aver%3A21.09)|
|`sco`|Scots|[![Scots issues](https://img.shields.io/github/issues/oscar-corpus/corpus/lang:sco?label=sco&style=for-the-badge)](https://github.com/oscar-corpus/corpus/issues?q=is%3Aissue+is%3Aopen+label%3Alang%3Asco+label%3Aver%3A21.09)|

## Dataset Structure

We show detailed information for all the configurations of the dataset.
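As the per-language examples below illustrate, each record pairs the extracted `text` with the WARC metadata preserved during extraction. The following is a short sketch of reading those fields, reusing the streaming `dataset` from the loading example above:

```python
# Sketch: inspect the WARC-derived metadata attached to an OSCAR 21.09 record,
# reusing the streaming `dataset` from the loading sketch above.
record = next(iter(dataset))

headers = record["meta"]["headers"]   # HTTP/WARC headers kept at extraction time
print("source page:  ", headers["warc-target-uri"])
print("crawl date:   ", headers["warc-date"])
print("detected lang:", headers["warc-identified-content-language"])
print("sentences:    ", record["meta"]["nb_sentences"])
print("text preview: ", record["text"][:80])
```

Note that `warc-identified-content-language` can list several codes (e.g. `afr,eng`) and may occasionally be `None`, as some of the examples below show.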
### Data Instances <details> <summary>Click to expand the Data/size information for each language (deduplicated)</summary> #### deduplicated_af * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 3287, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:BUOBNDDY3VZKNNUOY33PAWBXEVNDCDJK', 'warc-date': '2021-03-09T04:21:33Z', 'warc-identified-content-language': 'afr,eng', 'warc-record-id': '<urn:uuid:dece1e30-a099-411a-87fd-483791342d48>', 'warc-refers-to': '<urn:uuid:5a35e8b2-0fcb-4600-9d15-f5c6469ddf01>', 'warc-target-uri': 'http://www.northwestnewspapers.co.za/gemsbok/2015-06-18-10-02-17/hoe-om-n-ad-te-plaas/1907-man-betrap-met-jagluiperd-en-leeu-bene', 'warc-type': 'conversion'}, 'nb_sentences': 3, 'offset': 0}, 'text': 'Stap 2: Tik jou ad in die teks boksie, jy sal sien dat die prys aan ' 'die regterkant van die boksie verander volgens di...'} ``` #### deduplicated_als * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 4607, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:URQ53Z4I4KGPHICZYLW2ZOX7OWWCGZUA', 'warc-date': '2021-03-03T16:09:20Z', 'warc-identified-content-language': 'deu,eng', 'warc-record-id': '<urn:uuid:134499db-d54a-4c29-9517-350cacc3d29d>', 'warc-refers-to': '<urn:uuid:073aeb77-b4ed-47eb-b955-27031963acf4>', 'warc-target-uri': 'https://als.m.wikipedia.org/wiki/Neukaledonien', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'D Wirtschaft bestoot vor allem us Handwärk, Bärgbau, Industrii und ' 'Turismus. 
40 Kilometer vo dr Hauptstadt Nouméa äwä...'} ``` #### deduplicated_am * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 9679, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:YADJOQVUOQHUKJ7BXCKKU4LRFKE3JPOA', 'warc-date': '2021-03-09T04:16:32Z', 'warc-identified-content-language': 'amh,eng', 'warc-record-id': '<urn:uuid:fa02fe22-c72e-42e8-9cb3-89da85a80941>', 'warc-refers-to': '<urn:uuid:ff89f862-5e6a-41aa-bc40-ef1d2f91d258>', 'warc-target-uri': 'http://ethioforum.ethiopiaforums.com/viewtopic.php?f=6&t=3874&p=6511', 'warc-type': 'conversion'}, 'nb_sentences': 10, 'offset': 0}, 'text': '(ፍኖተ ነፃነት) በኢትዮጵያ የአዉሮፓ ሕብረት ልኡካን ቡድን መሪ አምባሳደር ቻንታል ሔበሬሽ፣ በአዉሮፓ ' 'ሕብረት የአፍሪካ ቀንድ እና የሕንድ ዉቂያኖስ አካባቢ ዴስክ ኦፌሴር ቪክቶሪያ ጋርሲ...'} ``` #### deduplicated_an * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 134014, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:OG2T3MJFSLSH33PVI7D3WPXVE6ZFLZ4Z', 'warc-date': '2021-03-08T00:58:33Z', 'warc-identified-content-language': 'ara,fra', 'warc-record-id': '<urn:uuid:0ef1d002-86e7-49c1-ac8a-8ba933d190ee>', 'warc-refers-to': '<urn:uuid:5071f1f7-3350-406d-ad97-f292fe7a2ff0>', 'warc-target-uri': 'http://dorous.ek.la/1-5-a6032874?reply_comm=68653652', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'ووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووو...'} ``` #### deduplicated_ar * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 12677, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:NFDDUGANGSGSFXIQAXEGIVHGRLFCUW55', 'warc-date': '2021-03-04T02:22:39Z', 'warc-identified-content-language': 'ara,eng', 'warc-record-id': '<urn:uuid:3ea1e651-68f3-4dde-bfea-7a12e5331084>', 'warc-refers-to': '<urn:uuid:dcecf9ad-1797-44d0-b06a-010c424ba396>', 'warc-target-uri': 'https://elmgals.net/?p=62804', 'warc-type': 'conversion'}, 'nb_sentences': 2, 'offset': 0}, 'text': 'مطحنة الكرة في ماسبات - orioloingeu. مطاحن الفرينة في مطحنة الكرة ' 'مراكز بيع الة طحن التوابل بيع ألات لرحي اسعار بيع ا...'} ``` #### deduplicated_arz * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 9603, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:6O2LEGAWXAWYSRH2TQNYOWX47ZFWTKRC', 'warc-date': '2021-03-09T03:51:17Z', 'warc-identified-content-language': 'ara', 'warc-record-id': '<urn:uuid:0578411b-367f-4d52-b85c-56b4bb64c0be>', 'warc-refers-to': '<urn:uuid:8777119c-434c-49a1-80a8-f2b23fa0e21c>', 'warc-target-uri': 'https://www.hko-ommen.nl/Nov_01/605.html', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'مستعملة 4265 كسارات للبيع - كسارة الحجر. كسارات مستعمله للبيع فى ' 'مصر. 
للبيع كسارات فى مصرمطلوب كسارات حجر مستعملة للب...'} ``` #### deduplicated_as * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 9280, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:DORQKORQ4TURDN35T75TW72IZ7IZIEFG', 'warc-date': '2021-03-03T15:06:57Z', 'warc-identified-content-language': 'asm,eng', 'warc-record-id': '<urn:uuid:fd6c3650-f91f-4f03-ae7a-bea654e043bb>', 'warc-refers-to': '<urn:uuid:48f057d6-f642-42d2-8de1-fec8e4fca4d4>', 'warc-target-uri': 'https://assam.nenow.in/%E0%A6%95%E0%A6%BE%E0%A6%87%E0%A6%B2%E0%A7%88%E0%A7%B0-%E0%A6%AA%E0%A7%B0%E0%A6%BE-%E0%A6%AF%E0%A7%8B%E0%A7%B0%E0%A6%B9%E0%A6%BE%E0%A6%9F%E0%A6%A4-%E0%A6%86%E0%A7%B0%E0%A6%AE%E0%A7%8D%E0%A6%AD/', 'warc-type': 'conversion'}, 'nb_sentences': 8, 'offset': 0}, 'text': 'যোৰহাট জিলাৰ এন আৰ চি উন্নিতকৰণৰ প্ৰথম পৰ্য্যায়ৰ বংশবৃক্ষ পৰীক্ষণৰ ' 'কাম কাইলৈৰ পৰা পৰীক্ষামূলকভাৱে আৰু ১৯ ফেব্ৰুৱাৰিৰ ...'} ``` #### deduplicated_ast * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 3752, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:BU44BHPYU2BOWH4TUAY7ZOEBFVQ6KD44', 'warc-date': '2021-03-01T15:56:44Z', 'warc-identified-content-language': 'spa', 'warc-record-id': '<urn:uuid:2b3ca12f-6614-4662-a4e9-16e1ce13a8b0>', 'warc-refers-to': '<urn:uuid:0e132db0-e0f4-44c5-ab63-48b7594a35a6>', 'warc-target-uri': 'https://elsummum.es/tag/dial-traxel-pais/', 'warc-type': 'conversion'}, 'nb_sentences': 2, 'offset': 0}, 'text': 'Esta ye la galería d’imáxenes de los participantes nel concursu, el ' 'xuráu y dellos miembros de la organización de la ...'} ``` #### deduplicated_av * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 2012, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:EULKS66PQCWWVXHNRPSISI72G3GFJD7L', 'warc-date': '2021-03-01T10:13:53Z', 'warc-identified-content-language': 'rus,eng', 'warc-record-id': '<urn:uuid:c2986179-7947-4184-9df5-dca05c987055>', 'warc-refers-to': '<urn:uuid:8b3e82e1-0964-4677-8b39-9bd3c67be25b>', 'warc-target-uri': 'http://gazetalevashi.ru/articles/media/2019/10/25/diktant-tiobitiana/', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Дагъистаналъул жамгIият рахьдал мацIал цIуниялде ва ' 'церетIезариялде, тарих, гIадатал, маданият ва дагъистаналъул ' 'халк...'} ``` #### deduplicated_az * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 59868, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:LDASIZ5NDJU6NRCJW7XCCI4QRLFIZZQX', 'warc-date': '2021-02-26T04:13:32Z', 'warc-identified-content-language': 'aze', 'warc-record-id': '<urn:uuid:a35cc521-926e-442d-b285-299ea4a3b72a>', 'warc-refers-to': '<urn:uuid:b60fd7ea-7056-4ebb-8ae5-eb02617ca8cd>', 'warc-target-uri': 'https://azrefs.org/iqtisadi-tesebbuslere-yardim-ictimai-birliyi-yerli-seviyyede-i.html', 'warc-type': 'conversion'}, 'nb_sentences': 70, 'offset': 0}, 'text': 'İQTİsadi TƏŞƏBBÜSLƏRƏ yardim iCTİMAİ BİRLİYİ Yerli səviyyədə içməli ' 'su təchizatı sisteminin idarə olunması\n' 'Az1009, 
Az...'} ``` #### deduplicated_azb * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 5245, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:XWTKHZGKVJI6ZAIKSTOA4AOP5PCWI2SH', 'warc-date': '2021-03-05T13:35:27Z', 'warc-identified-content-language': 'fas,uzb,eng', 'warc-record-id': '<urn:uuid:41816fd7-985e-4e35-b79b-bf471e68dd80>', 'warc-refers-to': '<urn:uuid:5717a90d-021c-428b-a69d-45d6cb2fc692>', 'warc-target-uri': 'https://azb.wikipedia.org/wiki/%D8%A2%D9%85%D8%B3%D8%AA%D8%B1%D8%AF%D8%A7%D9%85_%D8%A8%DB%8C%D9%84%DB%8C%D9%85%E2%80%8C%DB%8C%D9%88%D8%B1%D8%AF%D9%88', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'یازی Creative Commons Attribution-ShareAlike ' 'License;آلتیندا\u200cدیر آرتیق شرطلر آرتیریلا بیلر. آرتیق ایطلاعات ' 'اوچون ایشل...'} ``` #### deduplicated_ba * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 9444, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:NRTIKDSYAPTPQ64CKKLNR6TFVUYG7CLR', 'warc-date': '2021-03-09T04:46:56Z', 'warc-identified-content-language': 'uig,eng', 'warc-record-id': '<urn:uuid:b69f43f4-0e19-4cad-b083-fce91a40f64b>', 'warc-refers-to': '<urn:uuid:3176da53-14ff-4f65-91e4-4d209e9c7190>', 'warc-target-uri': 'https://uyghurix.net/archives/date/2016/05?uls=us', 'warc-type': 'conversion'}, 'nb_sentences': 3, 'offset': 0}, 'text': 'линакис системисиниң көрүнмә йүзи барғансери ишлитишкә қулайлиқ ' 'болуп, кәң ишлитиливатқан болсиму, әмили хизмәттә йән...'} ``` #### deduplicated_bar * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 105623, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:L7EXHEWTVKPV7BWPZJFKHM2TZ3ZNKPWC', 'warc-date': '2021-03-07T18:33:16Z', 'warc-identified-content-language': 'fra', 'warc-record-id': '<urn:uuid:578af8ce-2149-42e3-978c-5191caaaca8c>', 'warc-refers-to': '<urn:uuid:a7afc792-983c-43b7-9b5b-75b2dc5fcd77>', 'warc-target-uri': 'https://fr.readkong.com/page/automne-hiver-printemps-2017-8342349', 'warc-type': 'conversion'}, 'nb_sentences': 3, 'offset': 0}, 'text': ' ' 'vo\n' ' ...'} ``` #### deduplicated_be * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 3159, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:TEJML7M4S55254DZU43DXXORKPZMKGUL', 'warc-date': '2021-03-09T05:47:09Z', 'warc-identified-content-language': 'bel,eng', 'warc-record-id': '<urn:uuid:e22883c9-5622-4a0e-b259-b5265e6e345a>', 'warc-refers-to': '<urn:uuid:7ec2102d-2645-4fd9-89b8-557762996439>', 'warc-target-uri': 'https://be-tarask.wikipedia.org/wiki/%D0%9A%D0%B0%D1%82%D1%8D%D0%B3%D0%BE%D1%80%D1%8B%D1%8F:%D0%9F%D1%80%D1%8D%D1%81%D0%BD%D0%B0%D1%8F_%D0%B2%D0%B0%D0%B4%D0%B0', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Гэты тэкст даступны на ўмовах ліцэнзіі Creative Commons ' 'Attribution/Share-Alike 3.0; у асобных выпадках могуць ужывац...'} ``` #### deduplicated_bg * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk 
used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 23651, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:QDAV5ZVRR2IGND4ANWTVOBPNO2POZUEQ', 'warc-date': '2021-03-08T21:47:20Z', 'warc-identified-content-language': 'bul', 'warc-record-id': '<urn:uuid:0e422a1d-ac8c-4f21-bb71-e5c65282f30c>', 'warc-refers-to': '<urn:uuid:0109dba6-8f1a-4047-bdd5-cbcc38de63a8>', 'warc-target-uri': 'http://europe.bg/bg/bulgariya-poluchava-resor-inovacii-i-mladezh', 'warc-type': 'conversion'}, 'nb_sentences': 37, 'offset': 0}, 'text': 'От хилядите кубинци и другите граждани на страните от СИВ, ' 'командировани на строежа на АЕЦ-а, в Белене е останал само...'} ``` #### deduplicated_bh * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 9021, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:IN7PHDOP7MZD6RHN6KIJ7SXTY7VC76SK', 'warc-date': '2021-03-08T22:57:31Z', 'warc-identified-content-language': 'hin,eng', 'warc-record-id': '<urn:uuid:62e18c96-cd2c-461b-93d9-900d95eec89e>', 'warc-refers-to': '<urn:uuid:73ee6388-6f0a-460d-ac2e-bbc1a2b63bb4>', 'warc-target-uri': 'https://bh.wikipedia.org/wiki/%E0%A4%B6%E0%A5%8D%E0%A4%B0%E0%A5%87%E0%A4%A3%E0%A5%80:%E0%A4%B5%E0%A4%BF%E0%A4%95%E0%A4%BF%E0%A4%AA%E0%A5%80%E0%A4%A1%E0%A4%BF%E0%A4%AF%E0%A4%BE_%E0%A4%97%E0%A5%88%E0%A4%B0-%E0%A4%AE%E0%A5%81%E0%A4%95%E0%A5%8D%E0%A4%A4_%E0%A4%AB%E0%A4%BE%E0%A4%87%E0%A4%B2_%E0%A4%B5%E0%A5%88%E0%A4%A7_%E0%A4%AC%E0%A5%88%E0%A4%95%E0%A4%B2%E0%A4%BF%E0%A4%82%E0%A4%95_%E0%A4%95%E0%A5%87_%E0%A4%B8%E0%A4%BE%E0%A4%A5?from=Ea', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'ई एगो छुपावल गइल श्रेणी बाटे। ई पन्ना सभ पर तबले ना लउकी जबले कि ' 'प्रयोगकर्ता के सेटिंग, छुपावल गइल श्रेणी देखावे खाति...'} ``` #### deduplicated_bn * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 36198, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:7QRYGJ3YDG7SBTFUVMMALFA6UWNDVLVY', 'warc-date': '2021-03-05T07:10:58Z', 'warc-identified-content-language': 'ben', 'warc-record-id': '<urn:uuid:050c0cdb-562c-49e5-bcb6-7e5350531ea6>', 'warc-refers-to': '<urn:uuid:a3749b59-4285-4e90-ba64-aa9d745c1f46>', 'warc-target-uri': 'https://www.kalerkantho.com/online/business/2020/12/06/982949', 'warc-type': 'conversion'}, 'nb_sentences': 8, 'offset': 0}, 'text': 'নিজস্ব সংবাদদাতা: গাড়ি নয় যেন মানুষের খাঁচা। নেই কোন ভালো বসার ' 'আসন, যা আছে সেগুলো ভাঙ্গাচুরা, ময়লা ও ধুলাবালিতে ভর...'} ``` #### deduplicated_bo * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 5059, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:XHKOQL5IQBLCVBANFVH66ZZXJZHEEMYW', 'warc-date': '2021-03-03T15:06:26Z', 'warc-identified-content-language': 'zho,bod', 'warc-record-id': '<urn:uuid:3a406f8f-58cd-4990-ae6f-f63dff7e06e3>', 'warc-refers-to': '<urn:uuid:806c4a11-f8cd-49e8-bc22-cae5e0cf6ef2>', 'warc-target-uri': 'http://tcansee.com/goods.php?id=392', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': '所有分类 藏学名家名著 国内名家名著 国外名家名著政治 社会 法律 政治 法律 社会 经济文学 艺术 旅游 艺术 文学 旅游宗教 历史 ' '文化 宗教 历史 文化教育 童书 工具书 教辅 童书 工具书语言文字 语言研究 语言 文字期刊 社...'} 
``` #### deduplicated_bpy * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 8270, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:POHCGWDC32KW74IE26NTJ2UMNX7QRBDB', 'warc-date': '2021-03-05T14:00:16Z', 'warc-identified-content-language': 'ben', 'warc-record-id': '<urn:uuid:d53007ee-ddbe-44e9-8253-235567d2960c>', 'warc-refers-to': '<urn:uuid:0409ce75-26bc-4a60-b08d-4e2b6174127e>', 'warc-target-uri': 'http://pobnapurup.gaibandha.gov.bd/site/page/5dc0a075-18fd-11e7-9461-286ed488c766/%E0%A6%95%E0%A6%BE%E0%A6%B0%E0%A7%8D%E0%A6%AF%E0%A6%BE%E0%A6%AC%E0%A6%B2%E0%A7%80', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'পবনাপুর ইউনিয়ন---কিশোরগাড়ী ইউনিয়নহোসেনপুর ইউনিয়নপলাশবাড়ী ' 'ইউনিয়নবরিশাল ইউনিয়নমহদীপুর ইউনিয়নবেতকাপা ইউনিয়নপবনাপুর ইউনিয়...'} ``` #### deduplicated_br * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 3134, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:U353JBWLMC22GRYEIDN4WOSBUOIUMYQT', 'warc-date': '2021-02-24T21:00:25Z', 'warc-identified-content-language': 'bre', 'warc-record-id': '<urn:uuid:49d1650d-aaf5-43b9-b340-326746e88b31>', 'warc-refers-to': '<urn:uuid:04877e5f-6b86-497e-b39c-30a72683261f>', 'warc-target-uri': 'https://br.m.wiktionary.org/wiki/dont', 'warc-type': 'conversion'}, 'nb_sentences': 2, 'offset': 0}, 'text': 'Sellet e vez ouzh ar bajenn pe ar gevrenn-mañ evel un divraz da ' 'glokaat e brezhoneg. Mar gouezit tra pe dra diwar-ben...'} ``` #### deduplicated_bs * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 8483, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:HS77KGP5HJKJASHMW6WSYV326BPGVM35', 'warc-date': '2021-02-24T18:13:58Z', 'warc-identified-content-language': 'bos,hrv', 'warc-record-id': '<urn:uuid:c12f1b14-4194-405e-a059-9af2f7146940>', 'warc-refers-to': '<urn:uuid:31bedcb4-265f-4aa3-8d2c-cfdc64c42325>', 'warc-target-uri': 'http://mojusk.ba/zastrasujuce-slike-tamnice-u-kojoj-je-skolski-domar-silovao-12-godisnjakinju/', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Predsjednica Evropske centralne banke Christine Lagarde izjavila je ' 'da njen najveći strah nije da će Evropska...'} ``` #### deduplicated_bxr * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 6751, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:RELUZWSMYT63FAPLHP55SMNNCSXIQEDX', 'warc-date': '2021-02-26T07:18:33Z', 'warc-identified-content-language': 'mon,rus', 'warc-record-id': '<urn:uuid:efe8d9fa-4329-4479-aa56-43938e8e5370>', 'warc-refers-to': '<urn:uuid:bba3bfb2-b7c7-4605-9f49-34598eac9a5b>', 'warc-target-uri': 'http://soyol.ru/bur/yoho-zanshal/hoityn/', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Хүнэй бэе мүнхэ бэшэ. 
Һүнэһэнэй бэеымнай орхижо, түрэлөө ' 'урилхадань, тэрэнэй хальһан боложо ябаһан бэемнай үхэнэ, газ...'} ``` #### deduplicated_ca * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 30591, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:DJYNCXSBI5JH4V3LKGE7YNQBL34E3W5G', 'warc-date': '2021-03-02T21:39:28Z', 'warc-identified-content-language': 'cat,eng', 'warc-record-id': '<urn:uuid:ec350f95-900b-4164-aab3-8a6451228d5b>', 'warc-refers-to': '<urn:uuid:4c8e31b8-3011-4a21-9591-39be0942e121>', 'warc-target-uri': 'https://ca.m.wikipedia.org/wiki/Regne_d%27Ayutthaya', 'warc-type': 'conversion'}, 'nb_sentences': 33, 'offset': 0}, 'text': "El regne d'Ayutthaya va ser un estat a Tailàndia que va existir de " '1351 a 1767 governat per un rei. El rei Rāmadhipat...'} ``` #### deduplicated_cbk * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 151273, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:JCULI5BTSXOFUJYKZPPLMU5BZEZJZEVJ', 'warc-date': '2021-03-04T21:00:26Z', 'warc-identified-content-language': 'ita', 'warc-record-id': '<urn:uuid:ca25bd6b-9a5f-41b5-8b0f-ad437a545cee>', 'warc-refers-to': '<urn:uuid:ac67c26c-c62a-4c3d-9bd9-dd66a78a474f>', 'warc-target-uri': 'https://it.readkong.com/page/note-di-un-anno-di-lavoro-plural-3281543', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': ' ' 'na ' '...'} ``` #### deduplicated_ce * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 5944, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:AXGWUWKZ5HO42LSEO32HWLT77MATHGXB', 'warc-date': '2021-03-03T14:41:28Z', 'warc-identified-content-language': 'eng', 'warc-record-id': '<urn:uuid:1333c910-7921-4bdd-9bb9-1a8322dfa74b>', 'warc-refers-to': '<urn:uuid:9e976ac2-74e4-4e30-8c49-12f2dc1c257c>', 'warc-target-uri': 'https://www.radiomarsho.com/a/27368811.html', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Апти Бисултанов вина 1959 шарахь. Апти -- гоьваьлла нохчийн ' 'кхузаманахьлера байтанча ву. 
1983 шарахь цо чекхъяккхира ...'} ``` #### deduplicated_ceb * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 8799, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:GSVQUFRLD3BYXEG2ASAEVHR2IH4D7A2S', 'warc-date': '2021-03-09T04:28:21Z', 'warc-identified-content-language': 'ceb,eng', 'warc-record-id': '<urn:uuid:e53f5344-29f5-4e59-8dac-8fdc92d1758f>', 'warc-refers-to': '<urn:uuid:03c0e7e5-b84c-4205-80cc-c3fb3dc82406>', 'warc-target-uri': 'https://www.safesworld.com/ceb/safewell-17ef-small-combination-lock-digital-safe-box-with-electronic-combination.html', 'warc-type': 'conversion'}, 'nb_sentences': 4, 'offset': 0}, 'text': '17EF SERYE Talagsaong design ug madanihon nga kolor naghimo 17EF ' 'popular nga sa taliwala sa mga anak ug mga babaye, k...'} ``` #### deduplicated_ckb * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 8668, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:XZOIJPSX5QTL5QQPQMXEVADFHZTXMP5I', 'warc-date': '2021-03-09T03:25:59Z', 'warc-identified-content-language': 'kur,eng', 'warc-record-id': '<urn:uuid:9fe2f7e9-c158-4b84-a4a3-24e51acbd69e>', 'warc-refers-to': '<urn:uuid:14902cc0-948b-4dcf-bde6-e687ba41212f>', 'warc-target-uri': 'https://www.dastihawkary.org/blog/portfolio/social-harms-of-drugs/?lang=en', 'warc-type': 'conversion'}, 'nb_sentences': 9, 'offset': 0}, 'text': 'وەبیرم دێ\u200c لە كۆتایی هەشتاكانی سەدەی ڕابردوو دیاردەیەك هەبوو ' 'لەنێو گەنجە لادەرەكانی شاری هەولێر و سەرشەقام هەڵدەستان ...'} ``` #### deduplicated_cs * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 17263, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:EJZ477E7PWMVVVM777MHB5DMDHVYEWK6', 'warc-date': '2021-03-05T11:28:42Z', 'warc-identified-content-language': 'ces', 'warc-record-id': '<urn:uuid:6fc03e7f-9768-4f26-89ce-84fa4732e3c0>', 'warc-refers-to': '<urn:uuid:d78128e5-f667-4461-9f0c-2263d75b74a1>', 'warc-target-uri': 'https://www.lidovky.cz/relax/dobra-chut/mak-a-svestky-vyzkousejte-makovec-podle-romana-pauluse.A150427_125913_dobra-chut_ape?recommendationId=00000000-0000-5000-8000-000000000000', 'warc-type': 'conversion'}, 'nb_sentences': 12, 'offset': 0}, 'text': 'Porno motor vyhledávání o nové sedlo masáž se svou. 
pro měkký sex ' 'voda učitelka kočička videa stránky Starý pár sex n...'} ``` #### deduplicated_cv * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 4133, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:FKR5EKWIFACLGBIK6IKLHTHDNTEZNF3T', 'warc-date': '2021-03-03T14:25:27Z', 'warc-identified-content-language': 'rus', 'warc-record-id': '<urn:uuid:8140dbf0-2fb0-48d8-a834-c1b052bcc72d>', 'warc-refers-to': '<urn:uuid:cca433fe-6646-4ab7-b5da-f8e17821b43d>', 'warc-target-uri': 'http://chuv-krarm.3dn.ru/blog/vladimir_leontev_savna_masharam_emer_perle_purnar_i/2013-02-08-47', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Сайт авторĕ тата модераторĕ- Михайлов Алексей, Чăваш Республикин ' 'Президенчĕн 2010,2012 çулсенчи стипендиачĕ, Сайт адм...'} ``` #### deduplicated_cy * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 1967, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:RNFNJNY7RHGXN5NPEVF2PYNNIWOTDAMJ', 'warc-date': '2021-03-09T03:48:16Z', 'warc-identified-content-language': 'cym,eng', 'warc-record-id': '<urn:uuid:66f063ba-6a33-4f53-9cfb-7dc64a292e89>', 'warc-refers-to': '<urn:uuid:281f9c10-2d7d-4781-82f6-a504f27852a1>', 'warc-target-uri': 'https://cy.wikipedia.org/wiki/John_T._Koch', 'warc-type': 'conversion'}, 'nb_sentences': 2, 'offset': 0}, 'text': 'Graddiodd o Brifysgol Harvard, gan gymeryd doethuriaeth mewn ' 'Ieithoedd a Llenyddiaethau Celtaidd yn 1985. Bu hefyd yn...'} ``` #### deduplicated_da * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 22154, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:AF2FFBNZQ3TOEEZ3MFDU77CXZ6PVU3ZB', 'warc-date': '2021-03-01T12:49:13Z', 'warc-identified-content-language': 'dan', 'warc-record-id': '<urn:uuid:92fffabd-5d36-4539-b8eb-18a0f2554ddb>', 'warc-refers-to': '<urn:uuid:1970d6bb-474f-448b-a3e1-8a77c9a32cb6>', 'warc-target-uri': 'http://rosamundis.dk/thai-horsens-gode-parfumer-til-m%C3%A6nd/', 'warc-type': 'conversion'}, 'nb_sentences': 16, 'offset': 0}, 'text': 'Mange praler af den sindsro, de har fundet i huler i det ' 'norske/forfaldne franske ferielejligheder etc., hvor de har ...'} ``` #### deduplicated_de * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 11180, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:LLCPCA3RGKMXLYUEA3OZ2KFEEBNEOPE2', 'warc-date': '2021-03-09T01:22:52Z', 'warc-identified-content-language': 'eng,deu', 'warc-record-id': '<urn:uuid:0128ab60-86c8-4dc2-b1cf-57950654ae38>', 'warc-refers-to': '<urn:uuid:ff27032b-b843-4ba3-b1e2-377793173071>', 'warc-target-uri': 'http://bioconcepts.de/views/search.php?term=231&listed=y', 'warc-type': 'conversion'}, 'nb_sentences': 16, 'offset': 0}, 'text': 'Kreismeisterschaften bringen zahlreiche Sunderner Medaillengewinner ' 'und Titelträger - Tischtennis im Sauerland\n' 'Am ver...'} ``` #### deduplicated_diq * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: 
None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 4196, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:DTA56M722SM5BZLNADOCPXQGGT32J46O', 'warc-date': '2021-03-06T15:51:03Z', 'warc-identified-content-language': 'tur,srp,nno', 'warc-record-id': '<urn:uuid:b7dcd4a4-b130-4009-88d0-631ca51a7bcc>', 'warc-refers-to': '<urn:uuid:fe4e4ad7-3089-40d2-aa29-f675e3cea0dd>', 'warc-target-uri': 'https://diq.wikipedia.org/wiki/Z%C4%B1wan%C3%AA_Slawki', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Zıwanê Slawki, zıwano merdumanê Slawano. Zıwanê Slawki yew lızgeyê ' 'Zıwananê Hind u Ewropao. Keyeyê Zıwananê Slawki be...'} ``` #### deduplicated_dsb * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 20663, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:WWZOAFJJLJ4OHG2PTVLCMP664OR26XCR', 'warc-date': '2021-02-27T22:03:14Z', 'warc-identified-content-language': None, 'warc-record-id': '<urn:uuid:239b7155-8f37-4889-bad8-5bdb0aaa83c2>', 'warc-refers-to': '<urn:uuid:2714b744-a080-4807-a29a-d8f99c80e49c>', 'warc-target-uri': 'https://dsb.m.wikipedia.org/wiki/P%C5%9Bed%C5%82oga:LocMap', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Mjaz tamnjejšej pśedłogu a </noinclude>-kodom mógu pśidatne ' 'kategorije a cuzorěcne wótkaze stojaś. Ewentualne pśikład...'} ``` #### deduplicated_dv * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 7923, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:ECFUNRNYICXFAZXP5TLM45DPGJX5AHOI', 'warc-date': '2021-02-24T19:53:40Z', 'warc-identified-content-language': 'div,eng', 'warc-record-id': '<urn:uuid:23e2557a-dacc-428c-99fc-e41d4ce2ed95>', 'warc-refers-to': '<urn:uuid:067b6719-0209-49df-8198-27b1954b61b4>', 'warc-target-uri': 'https://dhiislam.com/114288', 'warc-type': 'conversion'}, 'nb_sentences': 7, 'offset': 0}, 'text': 'މީސްތަކުންގެ ފިކުރާއި ކުޅެލުމަށްޓަކައި މިޒަމާނުގެ ވަސީލަތްތަކުގެ ' 'ބޭނުން އެންމެ ރަނގަޅު ގޮތުގައި ހިފަމުންދޭ: ޝެއިޚް ފި...'} ``` #### deduplicated_el * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 12604, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:2LXNVVGR3C4G72RLJUJBKUWLZZJ53TPX', 'warc-date': '2021-03-03T11:34:34Z', 'warc-identified-content-language': 'ell,eng', 'warc-record-id': '<urn:uuid:d95ddbe8-2e54-4d61-a6af-227212090684>', 'warc-refers-to': '<urn:uuid:a0e15450-8455-4b2f-ad8f-3858873a538d>', 'warc-target-uri': 'https://www.androsportal.gr/category/topika/nea-syllogwn/', 'warc-type': 'conversion'}, 'nb_sentences': 18, 'offset': 0}, 'text': 'Η ραδιοφωνική διαφήμιση χαρακτηρίζεται από αμεσότητα και οικειότητα ' 'λόγω της στενής σχέσης του μέσου με τους ακροατές...'} ``` #### deduplicated_eml * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 11710, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:OM2W34UTSIJJHAEXEX42BYMZWBB7U3FS', 'warc-date': '2021-03-05T23:48:29Z', 
'warc-identified-content-language': 'ita', 'warc-record-id': '<urn:uuid:26a267af-a6de-4e84-b945-411b78b4815a>', 'warc-refers-to': '<urn:uuid:656aaba2-ff1d-4d7c-915a-9a555533aa42>', 'warc-target-uri': 'https://eml.wikipedia.org/wiki/2_(n%C3%B9mer)', 'warc-type': 'conversion'}, 'nb_sentences': 2, 'offset': 0}, 'text': "Al 2 'l è al prim nùmer prim ed tùta la séri ch'a s cata in di " "nùmer naturèl e anc 'l ùnic ch'al sìa pèra:\n" "Insèm a 'l..."} ``` #### deduplicated_en * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 15201, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:EIQTEGOE4V5SDID2OLTO4PWWCTW3AD5H', 'warc-date': '2021-03-03T18:20:30Z', 'warc-identified-content-language': 'eng', 'warc-record-id': '<urn:uuid:7cec445b-76fe-4ce2-ab43-8a85de680c6f>', 'warc-refers-to': '<urn:uuid:1cf845b2-3015-4f01-abaf-262af4adeba5>', 'warc-target-uri': 'https://www.aqueencitysound.com/2016/05', 'warc-type': 'conversion'}, 'nb_sentences': 28, 'offset': 0}, 'text': 'But the term “extension” also means lengthening. EkhartYoga members ' 'can get to k… Renforcement du dos (muscles para-v...'} ``` #### deduplicated_eo * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 27953, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:YO4NP6746IFQDF5KISEPLNFA2QD3PTEO', 'warc-date': '2021-03-09T05:29:46Z', 'warc-identified-content-language': 'epo,eng', 'warc-record-id': '<urn:uuid:5e3bc7b3-723f-4de9-8202-790351a2253f>', 'warc-refers-to': '<urn:uuid:dd5e537a-f340-4418-bc07-487232ea197c>', 'warc-target-uri': 'http://kantaro.ikso.net/cxu?image=kis_kut.png&ns=&tab_details=view&do=media', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Iloj Montri paĝonMalnovaj reviziojRetroligoj Freŝaj ' 'ŝanĝojMedio-administriloIndekso RegistriĝiEnsaluti'} ``` #### deduplicated_es * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 8322, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:DXIQKIWES4PP64BTGK5BYTJ3TX4RVQSI', 'warc-date': '2021-03-03T23:27:45Z', 'warc-identified-content-language': 'spa,eng', 'warc-record-id': '<urn:uuid:4275a14a-f997-4e58-8cf6-046006d76dab>', 'warc-refers-to': '<urn:uuid:d54d1a7b-1316-4bd1-8147-7a44ec5b3803>', 'warc-target-uri': 'https://www.rcrperu.com/defensoria-del-pueblo-oficina-en-lima-sur-registro-mas-de-3000-casos-durante-el-2020/', 'warc-type': 'conversion'}, 'nb_sentences': 7, 'offset': 0}, 'text': 'Se prevé que a finales de mes haya llegado al 92,5 por ciento de ' 'los centros, aquellos en los que no hay confirmados ...'} ``` #### deduplicated_et * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 57234, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:JU7SWP3ZS36M3ABAEPNTFH37MVI2SLAF', 'warc-date': '2021-02-24T20:43:43Z', 'warc-identified-content-language': 'est', 'warc-record-id': '<urn:uuid:2bbcaa39-7336-4ade-accf-1b582785f731>', 'warc-refers-to': '<urn:uuid:849563c9-8549-4bdc-a09c-d179c8399ae0>', 'warc-target-uri': 
'https://cardiaccareclinic.com/chto-luchshe-panangin-ili-kardiomagnil.html', 'warc-type': 'conversion'}, 'nb_sentences': 129, 'offset': 0}, 'text': 'Kas hirmu ei pruugi tekitada hoopis segadus? Näiteks võtame Ukraina ' 'kogemuse. Järsku ilmusid välja lindikestega mehed...'} ``` #### deduplicated_eu * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 4248, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:STDEJOH35DPN5UB52OUZJJC4YCN7EH3N', 'warc-date': '2021-03-09T05:11:48Z', 'warc-identified-content-language': 'spa,eus', 'warc-record-id': '<urn:uuid:fb6752f7-5e91-4d0c-b022-71bd5d3ce910>', 'warc-refers-to': '<urn:uuid:faca7a42-20c2-4c4c-bd8a-6d4be5a1adb6>', 'warc-target-uri': 'http://intermedia.eus/la-comunicacion-imprescindible-lo-que-no-debemos-olvidar-de-2015-resumido-en-447/', 'warc-type': 'conversion'}, 'nb_sentences': 2, 'offset': 0}, 'text': 'Nesken artean bokazio zientifikoak eta teknologikoak sustatzeko ' 'INSPIRA STEAM proiektua ia 120 ikastetxetako 5.000 ik...'} ``` #### deduplicated_fa * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 10411, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:VM7Q7TXNMU2SRNHFJSZMBCKU2YVRKI56', 'warc-date': '2021-03-02T11:23:27Z', 'warc-identified-content-language': 'fas', 'warc-record-id': '<urn:uuid:9f666d03-9592-4f59-9111-981a558b3a32>', 'warc-refers-to': '<urn:uuid:8daf3dc1-92dd-4dbf-a339-992c99f09112>', 'warc-target-uri': 'https://zhycan.com/concough/blog/%D9%86%D8%AD%D9%88%D9%87-%D8%AB%D8%A8%D8%AA-%D9%86%D8%A7%D9%85-%DA%A9%D9%86%DA%A9%D9%88%D8%B1-%D8%AF%DA%A9%D8%AA%D8%B1%DB%8C-97-%D8%A7%D8%B9%D9%84%D8%A7%D9%85-%D8%B4%D8%AF-%D8%A7%D9%85/', 'warc-type': 'conversion'}, 'nb_sentences': 16, 'offset': 0}, 'text': 'انجمن دانشجویان پیام نور تبليغات تماس با ما تبلیغات دسته بندی باز / ' 'بسته کردن دسته بندی ها . 
شرایط اختصاصی برای شغل د...'} ``` #### deduplicated_fi * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 19216, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:5OUEZDSL7KB2VHT2R67YZDER6UO5FHON', 'warc-date': '2021-03-05T00:14:23Z', 'warc-identified-content-language': 'fin,eng', 'warc-record-id': '<urn:uuid:61e0fc42-ceee-4026-ba76-3c8a8addd596>', 'warc-refers-to': '<urn:uuid:c4ba3c9f-5a6c-4de5-8f77-f5beb547315c>', 'warc-target-uri': 'https://kreditassms.eu/arvostelut-treffisivusto-py%C3%B6re%C3%A4-tanssi/', 'warc-type': 'conversion'}, 'nb_sentences': 46, 'offset': 0}, 'text': 'Facebook ulkomaiset morsiamet fantasia lähellä lohja mistä pillua ' 'porno leffat sex treffit karvaiset tussut Thai mass...'} ``` #### deduplicated_fr * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 5274, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:XUVXOZU2BIT4TIDEVHLLBLUIHRS4L7WV', 'warc-date': '2021-03-03T14:00:24Z', 'warc-identified-content-language': 'fra,eng', 'warc-record-id': '<urn:uuid:76252d00-9672-479c-9580-722614e078f9>', 'warc-refers-to': '<urn:uuid:4a6bde1e-9596-4388-9334-cc473a7c93ee>', 'warc-target-uri': 'https://www.cahier-des-charges.net/produit/modele-cahier-des-charges-de-logiciel-de-gestion-de-processus-metier/', 'warc-type': 'conversion'}, 'nb_sentences': 9, 'offset': 0}, 'text': 'Créée en 1765 par le duc de Villars, alors gouverneur de Provence, ' 'l’École supérieure d’art d’Aix en Provence est un ...'} ``` #### deduplicated_frr * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 27381, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:DJE2KO4YWWRERKS5JYSK5JCJWYZ6DJHM', 'warc-date': '2021-03-01T03:40:10Z', 'warc-identified-content-language': 'ell', 'warc-record-id': '<urn:uuid:3a2a34ae-1c42-4d2e-bb08-8dabc916ea30>', 'warc-refers-to': '<urn:uuid:caeb39b2-da76-463d-b80c-4917d3dca230>', 'warc-target-uri': 'https://www.sedik.gr/neo/el/%CE%B1%CF%81%CF%87%CE%B5%CE%AF%CE%BF-%CE%B5%CE%BB%CE%B1%CE%B9%CE%BF%CE%BD%CE%AD%CF%89%CE%BD/%CE%B1%CF%81%CF%87%CE%B5%CE%AF%CE%BF-%CE%B5%CE%BB%CE%B1%CE%B9%CE%BF%CE%BD%CE%AD%CF%89%CE%BD-2009/178-178-title', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': '’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ' '’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’...'} ``` #### deduplicated_fy * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 1807, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:JABSHFJ2L6SQOXPPTBYGZGR24GCEDTTM', 'warc-date': '2021-03-09T04:24:30Z', 'warc-identified-content-language': 'fry', 'warc-record-id': '<urn:uuid:fd1b28cb-20ce-4082-b1ca-40045ed6af73>', 'warc-refers-to': '<urn:uuid:bc50e1f0-6384-4054-8916-2a489e9a0ffd>', 'warc-target-uri': 'https://www.omropfryslan.nl/nijs/201805-gruttere-lisboksstal-tastien', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Melkfeehâlders yn Súdwest-Fryslân kinne tenei makliker ' "lisboksstâlen 
fergrutsje no't de gemeente de lanlike wet op st..."} ``` #### deduplicated_ga * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 3296, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:WF6SCFDXN3NOT7FPKTEFOAMMPKXSEZ2W', 'warc-date': '2021-03-09T04:37:11Z', 'warc-identified-content-language': 'gle', 'warc-record-id': '<urn:uuid:bff39289-dbf7-444c-8df1-382fd46c993d>', 'warc-refers-to': '<urn:uuid:e27ba1c5-5707-4e9f-8ba8-f42c67bd9fc9>', 'warc-target-uri': 'http://nos.ie/cultur/iarratais-a-lorg-don-slam-filiochta-agus-duaischiste-700-ann-i-mbliana/', 'warc-type': 'conversion'}, 'nb_sentences': 6, 'offset': 0}, 'text': 'Tá duaischiste £700 ar fáil do Slam Filíochta Liú Lúnasa a bheidh ' 'ar siúl ar líne ag deireadh na míosa seo chugainn. ...'} ``` #### deduplicated_gd * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 7659, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:OO363HOO6EDDYSBTTYB6H4WYAJBBMJ6D', 'warc-date': '2021-03-03T15:22:11Z', 'warc-identified-content-language': 'gla', 'warc-record-id': '<urn:uuid:e24cc86f-ae2c-49f6-b668-cda4f514a34d>', 'warc-refers-to': '<urn:uuid:1739d2d8-974d-4c29-b8d0-3a3ef9082537>', 'warc-target-uri': 'http://gd.cnswmc.com/ty320-3-bulldozer-product/', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Tha inneal-brathaidh TY320-3 crochte leth-chruaidh, gluasad ' 'uisgeachaidh, inneal tarbh fo smachd seòrsa hydraulic. Ta...'} ``` #### deduplicated_gl * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 4202, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:TIH7ARF4FNLH7VRGHXKOWVHNXNXC2HZX', 'warc-date': '2021-03-09T04:47:46Z', 'warc-identified-content-language': 'glg', 'warc-record-id': '<urn:uuid:983dd790-0846-4232-a7b4-3956af0982a8>', 'warc-refers-to': '<urn:uuid:b77207af-29d0-459f-9a55-0b25501d3e8b>', 'warc-target-uri': 'http://concellomuxia.com/item/outras-capelas/', 'warc-type': 'conversion'}, 'nb_sentences': 8, 'offset': 0}, 'text': 'O templo actual é producto de diversas reconstrucións que se ' 'realizaron a finais do século XVII e principios do XVIII...'} ``` #### deduplicated_gn * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 3873, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:FWN62CTWNJKPWUARS4BMBUFU6OVHL6XP', 'warc-date': '2021-02-27T22:49:49Z', 'warc-identified-content-language': 'grn,eng,bih', 'warc-record-id': '<urn:uuid:b4954ced-abe0-487e-b5b0-a26beb751a02>', 'warc-refers-to': '<urn:uuid:be5468f1-47f0-4bd8-a177-3529a14dead7>', 'warc-target-uri': 'https://gn.wikipedia.org/wiki/Apere%27arusu', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Ko ñe\'ẽ "apere\'arusu" ou avañe\'ẽ ñe\'ẽngue "apere\'a" he\'ise ' 'India Tapiti, ha avañe\'ẽ ñe\'ẽngue "rusu" he\'iséva iguasúva.'} ``` #### deduplicated_gom * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` 
#### deduplicated_gd

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 7659,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:OO363HOO6EDDYSBTTYB6H4WYAJBBMJ6D',
                         'warc-date': '2021-03-03T15:22:11Z',
                         'warc-identified-content-language': 'gla',
                         'warc-record-id': '<urn:uuid:e24cc86f-ae2c-49f6-b668-cda4f514a34d>',
                         'warc-refers-to': '<urn:uuid:1739d2d8-974d-4c29-b8d0-3a3ef9082537>',
                         'warc-target-uri': 'http://gd.cnswmc.com/ty320-3-bulldozer-product/',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Tha inneal-brathaidh TY320-3 crochte leth-chruaidh, gluasad uisgeachaidh, inneal tarbh fo smachd seòrsa hydraulic. Ta...'}
```

#### deduplicated_gl

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 4202,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:TIH7ARF4FNLH7VRGHXKOWVHNXNXC2HZX',
                         'warc-date': '2021-03-09T04:47:46Z',
                         'warc-identified-content-language': 'glg',
                         'warc-record-id': '<urn:uuid:983dd790-0846-4232-a7b4-3956af0982a8>',
                         'warc-refers-to': '<urn:uuid:b77207af-29d0-459f-9a55-0b25501d3e8b>',
                         'warc-target-uri': 'http://concellomuxia.com/item/outras-capelas/',
                         'warc-type': 'conversion'},
            'nb_sentences': 8,
            'offset': 0},
  'text': 'O templo actual é producto de diversas reconstrucións que se realizaron a finais do século XVII e principios do XVIII...'}
```

#### deduplicated_gn

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 3873,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:FWN62CTWNJKPWUARS4BMBUFU6OVHL6XP',
                         'warc-date': '2021-02-27T22:49:49Z',
                         'warc-identified-content-language': 'grn,eng,bih',
                         'warc-record-id': '<urn:uuid:b4954ced-abe0-487e-b5b0-a26beb751a02>',
                         'warc-refers-to': '<urn:uuid:be5468f1-47f0-4bd8-a177-3529a14dead7>',
                         'warc-target-uri': 'https://gn.wikipedia.org/wiki/Apere%27arusu',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Ko ñe\'ẽ "apere\'arusu" ou avañe\'ẽ ñe\'ẽngue "apere\'a" he\'ise India Tapiti, ha avañe\'ẽ ñe\'ẽngue "rusu" he\'iséva iguasúva.'}
```

#### deduplicated_gom

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 8747,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:CKNSFAH2KISLLR7222FSQSPENYHQTAX3',
                         'warc-date': '2021-03-01T11:10:29Z',
                         'warc-identified-content-language': 'mar',
                         'warc-record-id': '<urn:uuid:d4622a3e-1b0e-4775-b25d-273ee14ae176>',
                         'warc-refers-to': '<urn:uuid:9d00e57b-9031-4f86-a9c8-cc3c0c2213a7>',
                         'warc-target-uri': 'https://gom.m.wikipedia.org/wiki/%E0%A4%B5%E0%A5%80%E0%A4%9C',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'कांय वस्तू रगडल्यो तर तांचेकडेन हलक्यो वस्तू आकर्शित जाता हेंजेन्ना पळयलें तेन्ना वीज हे ऊर्जेची कल्पना मनशाक आयली.हे...'}
```

#### deduplicated_gu

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 15036,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:2FGV42SN72HRKRBEEQ7QJVJBLUYQPCIH',
                         'warc-date': '2021-03-09T04:48:08Z',
                         'warc-identified-content-language': 'eng,khm,lao',
                         'warc-record-id': '<urn:uuid:04d772d6-09db-4d5a-86c8-22b914a35b6f>',
                         'warc-refers-to': '<urn:uuid:f3cdcafa-5a28-4fbb-81df-7cc5e7bb3248>',
                         'warc-target-uri': 'http://www.ahealthyme.com/RelatedItems/RelatedDocuments.pg?d=&TypeId=121&ContentId=761&Category=DC',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'ધ્યાન આપો: જો તમે ગુજરા તી બોલતા હો, તો તમને ભા ષા કીય સહાય તા સેવા ઓ વિ ના મૂલ્યે ઉપલબ્ધ છે. તમા રા આઈડી કાર ્ડ પર આ...'}
```

#### deduplicated_gv

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 29707,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:TIDW47D4MAHOLY6PQZ5SHLDYQIJ66REQ',
                         'warc-date': '2021-03-06T18:16:22Z',
                         'warc-identified-content-language': 'glv,eng',
                         'warc-record-id': '<urn:uuid:c7a5e531-487b-4e52-96ca-33b658691652>',
                         'warc-refers-to': '<urn:uuid:fa7285d4-126c-458f-9a72-d0d8615ce494>',
                         'warc-target-uri': 'https://gv.wikipedia.org/wiki/%C3%87hengoaylleeaght',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Ta çhengoaylleeaght feamagh eiyrt er sheiltynyssyn çhengoaylleeagh ayns ayrnyn myr ynsaghey çhengaghyn joaree, glare-...'}
```

#### deduplicated_he

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 12254,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:BL56ZUXYO5GLIO6YTBUWKPVYJN2BKCIM',
                         'warc-date': '2021-03-09T10:29:09Z',
                         'warc-identified-content-language': 'heb,eng',
                         'warc-record-id': '<urn:uuid:1ae77825-a836-424e-a8b1-1f9c985a41b9>',
                         'warc-refers-to': '<urn:uuid:fce3d3dc-979e-4603-82e3-027b75346e52>',
                         'warc-target-uri': 'https://shop.makeup.land/collections/frontpage',
                         'warc-type': 'conversion'},
            'nb_sentences': 2,
            'offset': 0},
  'text': 'הולדת פג היא אירוע מטלטל לכל משפחה, אך הולדת פג בצל מגפת הקורונה מאתגרת אף יותר? מהם האתגרים עמם מתמודדים ההורים והצו...'}
```

#### deduplicated_hi

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 7897,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:VZCN5HXN57VQHZJT5G3NWV7RCIT4GP7T',
                         'warc-date': '2021-02-26T10:18:11Z',
                         'warc-identified-content-language': 'hin,eng',
                         'warc-record-id': '<urn:uuid:6cccccb7-be0e-4c16-83be-7b4150b107ac>',
                         'warc-refers-to': '<urn:uuid:41eda5d1-e2cf-44f4-9f5b-c074a2de89da>',
                         'warc-target-uri': 'https://36.gurturgoth.com/2019/11/blog-post_8.html',
                         'warc-type': 'conversion'},
            'nb_sentences': 5,
            'offset': 0},
  'text': 'Bill Gates Biography in Hindi, विश्व के सबसे अमीर इंसान और माइक्रोसॉफ्ट कंपनी के संस्थापक Bill Gates जिसने अपनी बुद्ध...'}
```

#### deduplicated_hr

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 41545,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:6NTZEPK7ETF4AOLM3YDZRLRGZAKH7XM3',
                         'warc-date': '2021-03-09T04:58:04Z',
                         'warc-identified-content-language': 'hrv,bos,eng',
                         'warc-record-id': '<urn:uuid:32361cc9-e12a-4861-978a-b94b84efe78c>',
                         'warc-refers-to': '<urn:uuid:f0476e5f-e04c-4741-94a6-ddbcfb25c17e>',
                         'warc-target-uri': 'http://mjesec.ffzg.hr/webpac/?rm=results&show_full=1&f=PersonalName&v=Sanader%20Mirjana',
                         'warc-type': 'conversion'},
            'nb_sentences': 3,
            'offset': 0},
  'text': 'Impresum: Pula : Sveučilište u Zagrebu, Međunarodno središte hrvatskih sveučilišta u Istri, Međunarodni istraživački ...'}
```

#### deduplicated_hsb

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 3352,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:E5ZCT5OIZBDV2EFBNX3MSLFJKKMZWQWI',
                         'warc-date': '2021-03-08T22:15:50Z',
                         'warc-identified-content-language': None,
                         'warc-record-id': '<urn:uuid:374a31b4-d38f-4d94-b3df-59013b15e644>',
                         'warc-refers-to': '<urn:uuid:fa9b7b26-2b4c-4acc-a652-47047617b0c0>',
                         'warc-target-uri': 'https://www.serbske-nowiny.de/index.php/hsb/z-luzicy/lokalka/item/50643-jednotna-proty-ka-tr-bna',
                         'warc-type': 'conversion'},
            'nb_sentences': 2,
            'offset': 0},
  'text': 'Žonjace akciske tydźenje zahajene\tDźensniši Mjezynarodny dźeń žonow je zazběh hač do 22. apryla trajacych ...\t\nWotstr...'}
```

#### deduplicated_ht

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 17823,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:LXQEYMTPIKHPAYKEKIZF6FCMC6WH66PW',
                         'warc-date': '2021-02-25T02:48:22Z',
                         'warc-identified-content-language': 'rus',
                         'warc-record-id': '<urn:uuid:a5599306-82ad-4740-9c00-5bba34c96d54>',
                         'warc-refers-to': '<urn:uuid:2378d2f7-69a4-4f8a-ad03-4d556d031ebb>',
                         'warc-target-uri': 'http://mywebstores.ru/index.php?id_product=1841&controller=product',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'начать us $ nan us $ nan us $ nan us $ nan us $ nan us $ nan us $ nan us $ nan us $ nan us $ nan us $ nan us $ nan us...'}
```

#### deduplicated_hu

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 39801,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:B3XHZ4C4AJYQLVV3ESGOVZU6FZ5N5637',
                         'warc-date': '2021-02-26T07:03:18Z',
                         'warc-identified-content-language': 'hun',
                         'warc-record-id': '<urn:uuid:926ed467-3adb-44f5-b33c-63112879ba5a>',
                         'warc-refers-to': '<urn:uuid:9d9175b4-6b0a-45e8-961b-61e9d50eb684>',
                         'warc-target-uri': 'https://luminanz.eu/anya-hatartalan-ingyen-videok-pina-nagy-video-video-sex-szekx-hd-videa-nyelvu-%C3%B6reg/',
                         'warc-type': 'conversion'},
            'nb_sentences': 104,
            'offset': 0},
  'text': 'A WordPress egy ingyenesen letölthető rendszer. Letöltés után csak telepíteni kell a webszerverre és máris használhat...'}
```

#### deduplicated_hy

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 6269,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:42PWBXN2Q7PFCRFWIDLTW42KUUGAKQOE',
                         'warc-date': '2021-02-24T23:49:31Z',
                         'warc-identified-content-language': 'hye,eng',
                         'warc-record-id': '<urn:uuid:932d1903-aea7-4be9-abb4-6b3114592c9c>',
                         'warc-refers-to': '<urn:uuid:cecf676f-884a-4311-a0b5-45ade0f517b7>',
                         'warc-target-uri': 'https://www.usanogh.am/lur/tramp-amn-coronavirus/',
                         'warc-type': 'conversion'},
            'nb_sentences': 4,
            'offset': 0},
  'text': 'ՀՀ ԳԱԱ Զեկույցներ =Reports NAS RA կիրառում է «Ստեղծագործական համայնքներ» հեղինակային իրավունքի արտոնագիրը համաձայն որ...'}
```

#### deduplicated_ia

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 9479,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:4JBN4SUDHHRPZI3TAVTZ4JUYSSOGGRFX',
                         'warc-date': '2021-03-01T17:14:58Z',
                         'warc-identified-content-language': 'ron,eng',
                         'warc-record-id': '<urn:uuid:5abe05ff-7309-4c3f-8ccd-175a12a655a2>',
                         'warc-refers-to': '<urn:uuid:8dec50fd-2be1-4bcf-8bb2-8cb9826c2465>',
                         'warc-target-uri': 'https://www.monitorulsv.ro/Ultima-ora-local/2008-02-18/Campania-electorala-interzisa-in-Primaria-Suceava',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ...'}
```
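As the `deduplicated_ht` and `deduplicated_ia` examples above show, deduplication does not remove token-level spam such as repeated `nan` or `ha` runs. A minimal sketch of a naive repetition filter one might layer on top (the 0.3 threshold is an arbitrary illustration, not part of any official pipeline):

```
from collections import Counter

def looks_repetitive(text: str, max_ratio: float = 0.3) -> bool:
    """Flag documents dominated by a single repeated token."""
    tokens = text.split()
    if len(tokens) < 10:          # too short to judge; keep it
        return False
    _, top_count = Counter(tokens).most_common(1)[0]
    return top_count / len(tokens) > max_ratio

assert looks_repetitive("ha " * 50)   # catches spam like the examples above
assert not looks_repetitive("a varied sentence with mostly distinct words and no dominant repeated token anywhere")
```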
#### deduplicated_id

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 3080,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:XU6GIUNYT5ELGH5XSZ4FUARC3YTJAD5P',
                         'warc-date': '2021-03-05T03:32:56Z',
                         'warc-identified-content-language': 'ind',
                         'warc-record-id': '<urn:uuid:2328da88-ee5f-4b4c-af3e-25dc4a574041>',
                         'warc-refers-to': '<urn:uuid:0781f7e2-f020-402b-b204-71fdf299f956>',
                         'warc-target-uri': 'https://sulsel.kemenag.go.id/berita/berita-kontributor/stqh-26-tingkat-kabupaten-jeneponto-siap-di-gelar',
                         'warc-type': 'conversion'},
            'nb_sentences': 2,
            'offset': 0},
  'text': '* Masa berlaku normal poin 1 (satu) tahun dan masa berlaku bonus poin sampai dengan 31 Desember 2020.\nDiskon dari Ban...'}
```

#### deduplicated_ie

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 16919,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:W7UDGWMCEYQFEIPJMFZKX72Z6MH4XCUP',
                         'warc-date': '2021-03-08T16:16:42Z',
                         'warc-identified-content-language': 'ron,eng',
                         'warc-record-id': '<urn:uuid:f5ba5473-8eb2-41f4-9e43-3d36f14243a1>',
                         'warc-refers-to': '<urn:uuid:d2784efa-8250-4370-a348-28c640195663>',
                         'warc-target-uri': 'https://rolabel.info/door/yX-WpseZpNycfXY/luis-gabriel-haziran-te-am-cautat-si-te-am-gasit-official-video.html',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Va iubesc mult mult mult mult mult mult mult mult mult mult mult mult mult mult mult mult mult mult mult mult mult mu...'}
```

#### deduplicated_ilo

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 3511,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:NLHH2LVPZTUZE37ET2FJIRZNOLPLKK4O',
                         'warc-date': '2021-03-03T15:52:32Z',
                         'warc-identified-content-language': 'tgl',
                         'warc-record-id': '<urn:uuid:2fb6a437-41c8-4c2c-9f5d-2e8c34df9f2b>',
                         'warc-refers-to': '<urn:uuid:bdc072a0-db63-4256-a96b-7515a2c4fdfd>',
                         'warc-target-uri': 'https://ilo.m.wikipedia.org/wiki/Amphibia',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Daytoy nga artikulo dagiti nangruna nga artikulo ket pungol. Makatulongka iti Wikipedia babaen ti panagnayon iti daytoy.'}
```

#### deduplicated_io

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 3586,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:VUQPETM2PUWBL5AGADEVN2FPE7KURXG4',
                         'warc-date': '2021-03-03T15:22:41Z',
                         'warc-identified-content-language': 'ara',
                         'warc-record-id': '<urn:uuid:fd8a899b-d54a-424d-9955-a90b81e16439>',
                         'warc-refers-to': '<urn:uuid:c40226a6-6851-4009-a834-77a1a3e0c0f3>',
                         'warc-target-uri': 'https://io.wikipedia.org/wiki/New_Vienna,_Iowa',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': "Segun l'Usana Kontado Ministerio, l'urbo havas entote 1.2 km², equivalanta a 0.4 mi², di qui 1.2 km² (0.4 mi²) esas l..."}
```

#### deduplicated_is

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 1829,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:DXUGRT4OK7WRCOPGB7AAKLHPUDTBDRO2',
                         'warc-date': '2021-03-09T04:40:07Z',
                         'warc-identified-content-language': 'isl',
                         'warc-record-id': '<urn:uuid:6568bf31-b402-45b8-9ddb-6ce0f3d0a323>',
                         'warc-refers-to': '<urn:uuid:5daa12c0-604a-4233-9ed8-d4e245af4048>',
                         'warc-target-uri': 'http://hugvis.hi.is/',
                         'warc-type': 'conversion'},
            'nb_sentences': 2,
            'offset': 0},
  'text': 'Vegna hertra aðgerða í bará ttunni við Covid19 munum við takmarka gestafjölda í laugum okkar við 80 manns. Thank you ...'}
```

#### deduplicated_it

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 14112,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:MLJ4TW2HJZAPE2ORVARPJES6GRGO6ZLK',
                         'warc-date': '2021-03-05T13:56:32Z',
                         'warc-identified-content-language': 'ita',
                         'warc-record-id': '<urn:uuid:31d7ebb5-c1f7-468b-92f8-b79b7c28af9f>',
                         'warc-refers-to': '<urn:uuid:f92f33a2-6940-49fd-a21e-228ee5d2efb1>',
                         'warc-target-uri': 'https://mauriziomezzetti.com/patologie-trattate/',
                         'warc-type': 'conversion'},
            'nb_sentences': 47,
            'offset': 0},
  'text': 'Il Presidente del Caffè Letterario Quasimodo di Modica, Domenico Pisana, sarà ospite a Taranto, il prossimo 4 maggio,...'}
```

#### deduplicated_ja

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 16411,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:XOFBBBX7LINQS3EZN5VH6OQ7PPFNRICJ',
                         'warc-date': '2021-03-09T01:09:27Z',
                         'warc-identified-content-language': 'jpn,eng,lat',
                         'warc-record-id': '<urn:uuid:5c0685f4-736d-4155-9153-56cf79462df4>',
                         'warc-refers-to': '<urn:uuid:88586e1b-926d-4291-910f-53680e3d6482>',
                         'warc-target-uri': 'http://flpj.karapyzi.ru/30',
                         'warc-type': 'conversion'},
            'nb_sentences': 14,
            'offset': 0},
  'text': '番組『日本を元気に!スマイルサプライズ!』が、28日に放送(後7:00)。コロナ禍や自然災害など、日本が長いトンネルに入ってしまったような状態だが、「でも、きっとこの先に明るい出口がある!」と明るい未...\nプリゲーム『ポケモンスマイ...'}
```

#### deduplicated_jbo

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 6970,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:2EVVU2OCTSB5EYCHSV6Z7I3PMQSNNOED',
                         'warc-date': '2021-03-03T23:28:54Z',
                         'warc-identified-content-language': None,
                         'warc-record-id': '<urn:uuid:0d4387a2-391d-4e3e-8772-808face0ab78>',
                         'warc-refers-to': '<urn:uuid:4e45af2a-aea7-4f1a-af89-6ee5f69b7bfd>',
                         'warc-target-uri': 'https://jbo.m.wikipedia.org/wiki/mumyma%27i_7moi',
                         'warc-type': 'conversion'},
            'nb_sentences': 26,
            'offset': 0},
  'text': "ni'o 7 la mumast. cu 7moi djedi fi'o masti la mumast. noi ke'a cu mumoi masti .i 6 la mumast. cu purlamdei .ije 8 la ..."}
```

#### deduplicated_jv

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 8822,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:NPQGATEVIAYLOSLDB22EB7IYDVBZ7N6Q',
                         'warc-date': '2021-03-09T11:14:25Z',
                         'warc-identified-content-language': 'jav',
                         'warc-record-id': '<urn:uuid:db7d8bd7-a3a3-4a30-8786-7efb2352285d>',
                         'warc-refers-to': '<urn:uuid:2cb85a37-545e-471a-b7e7-cb334112f0e3>',
                         'warc-target-uri': 'https://jv.wikipedia.org/wiki/Bon%C3%A9kah',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Yèn sadurungé golèkan digawé kanggo awaké dhéwé, wiwit jaman iki dikomersialakaké. Fungsiné owah saka ritual lan mode...'}
```

#### deduplicated_ka

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 42480,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:HHSMTLZXKA4SQDPDBWAOUFELXBUJZJKO',
                         'warc-date': '2021-03-06T15:33:35Z',
                         'warc-identified-content-language': 'kat,eng',
                         'warc-record-id': '<urn:uuid:7d931f2a-a6ef-4070-9277-2033e7e96b9b>',
                         'warc-refers-to': '<urn:uuid:89429497-9722-45e6-95a6-699ef7280e6c>',
                         'warc-target-uri': 'https://ka.m.wikipedia.org/wiki/%E1%83%93%E1%83%90%E1%83%A1%E1%83%A2%E1%83%98%E1%83%9C_%E1%83%B0%E1%83%9D%E1%83%A4%E1%83%9B%E1%83%90%E1%83%9C%E1%83%98',
                         'warc-type': 'conversion'},
            'nb_sentences': 36,
            'offset': 0},
  'text': 'დასტინ ჰოფმანი[1] (ინგლ. Dustin Lee Hoffman დ. 8 აგვისტო, 1937) — ორგზის კინოაკადემიის ოსკარისა და ექვსგზის ოქროს გლო...'}
```

#### deduplicated_kk

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 9197,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:BJW4PLV2UOAJLJO6E55YH7DAEWQTFQUZ',
                         'warc-date': '2021-03-09T04:35:14Z',
                         'warc-identified-content-language': 'rus,kaz',
                         'warc-record-id': '<urn:uuid:ddd1d3e1-3bf3-4c4a-b722-8e293ab16f75>',
                         'warc-refers-to': '<urn:uuid:097c4f10-4bdc-400d-ab39-c04e4f98f51f>',
                         'warc-target-uri': 'http://blogs.kazakh.ru/blogs/index.php?page=group&gid=6&id=3&PAGEN_1=3%3Fid%3D2?id=6',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Бұрынғы жоғары лауазымды шенеунік Анатолий Шкарупа (сол жақта) өзіне қарсы қозғалған қылмыстық іс бойынша өтіп жатқан...'}
```

#### deduplicated_km

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 15036,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:2FGV42SN72HRKRBEEQ7QJVJBLUYQPCIH',
                         'warc-date': '2021-03-09T04:48:08Z',
                         'warc-identified-content-language': 'eng,khm,lao',
                         'warc-record-id': '<urn:uuid:04d772d6-09db-4d5a-86c8-22b914a35b6f>',
                         'warc-refers-to': '<urn:uuid:f3cdcafa-5a28-4fbb-81df-7cc5e7bb3248>',
                         'warc-target-uri': 'http://www.ahealthyme.com/RelatedItems/RelatedDocuments.pg?d=&TypeId=121&ContentId=761&Category=DC',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'ការជូនដំណឹង៖ ប្រសិនប. ើអ្នកនិយាយភាសា ខ្មែរ សេ វាជំនួយភាសាឥតគិតថ្លៃ គឺអាចរកបានសម្ រាប ់អ្នក។ សូមទូរស័ព្ទទ ៅផ ្នែ កសេ វ...'}
```

#### deduplicated_kn

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 8425,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:TMWGSQVJMRPZCPMDM5D3AK2YKGMWBZZI',
                         'warc-date': '2021-03-09T04:21:39Z',
                         'warc-identified-content-language': 'kan,eng',
                         'warc-record-id': '<urn:uuid:ca35da96-ee3a-43ad-8082-a10b055200ca>',
                         'warc-refers-to': '<urn:uuid:a57cc8f6-c5ed-47a2-9322-2259687cdbde>',
                         'warc-target-uri': 'https://kannada.b4blaze.com/tag/rachitha-ram/',
                         'warc-type': 'conversion'},
            'nb_sentences': 16,
            'offset': 0},
  'text': 'ಅಡಿಗರು ಮತ್ತು ರಾಯರು ಚಾಪೆ ಹಾಸಿ ಸ್ವಲ್ಪ ಹೊತ್ತು ಮಲಗಿ ಕಾಫಿ ಕುಡಿದು ಹೊರಟುಹೋದಿದ್ದರು. ಜಾತ್ರೆ ದಿನ ಜಗನ್ನಾಥನ ಮನೆಗೆ ಬರಬಹುದಾದ ನೂರಾರು...'}
```

#### deduplicated_ko

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2831,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:DLTUACNWU3R5KYI7HMMZF4CYR4WGRMWU',
                         'warc-date': '2021-02-26T10:13:10Z',
                         'warc-identified-content-language': 'kor,eng',
                         'warc-record-id': '<urn:uuid:7f7727bf-bf3d-45c3-8e3c-b595f67f9d90>',
                         'warc-refers-to': '<urn:uuid:17735508-d2ce-4e0a-a3ba-86acb749b9a2>',
                         'warc-target-uri': 'http://excel2017.zz.am/entry/mousqul',
                         'warc-type': 'conversion'},
            'nb_sentences': 3,
            'offset': 0},
  'text': '인류는 최근 수백년 동안 물질적 풍요를 행복의 최대 조건으로 믿고, 이를 추구해 왔다. 그러나 이 과정에서 사람들은 상대방에게 사랑을 베풀기보다는 상처를 입히는 일이 많아졌고, 물질적 풍요는 내면의 충족을 동반...'}
```

#### deduplicated_krc

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 4806,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:CWWWGTU7JCHS7SR5A7D7QMDTF4JBMCA6',
                         'warc-date': '2021-02-26T04:08:10Z',
                         'warc-identified-content-language': 'nno,bih',
                         'warc-record-id': '<urn:uuid:ef2175c0-4887-4006-9b21-374282abf2d2>',
                         'warc-refers-to': '<urn:uuid:d5aaef09-6f3c-427a-8c2f-664e639c2a0f>',
                         'warc-target-uri': 'https://krc.wikipedia.org/wiki/1606_%D0%B4%D0%B6%D1%8B%D0%BB',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Бу, тамамланмагъан статьяды. Сиз болушургъа боллукъсуз проектге, тюзетиб эм информация къошуб бу статьягъа.'}
```

#### deduplicated_ku

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 12767,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:BQQEDD5HKU6LXDRIDLMWPIESOMEGIUX6',
                         'warc-date': '2021-03-09T04:11:10Z',
                         'warc-identified-content-language': 'eng',
                         'warc-record-id': '<urn:uuid:5a67e5e4-f688-4aa1-a9a0-2e4f6217ef21>',
                         'warc-refers-to': '<urn:uuid:40fa61be-18d1-4bd5-9267-252720cd5b05>',
                         'warc-target-uri': 'http://www.peyamakurd.org/kurmanci/Kurdistan/gruben-smo-ye-bi-hawane-li-til-rifete-xistin-3-miri-u-6-birindar',
                         'warc-type': 'conversion'},
            'nb_sentences': 2,
            'offset': 0},
  'text': 'PeyamaKurd – Grûbên bi ser Tirkiyê de li Binxetê li bajarokê Til Rifetê bi hawanê lê dan û di encamê de 3 kes mirin û...'}
```

#### deduplicated_kv

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 14161,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:JH3R64H4VMXQ3NRHTX3LO3B4VFN6IZ62',
                         'warc-date': '2021-03-03T15:09:36Z',
                         'warc-identified-content-language': 'rus',
                         'warc-record-id': '<urn:uuid:a94b390c-8e72-475d-bf76-c523c20908ce>',
                         'warc-refers-to': '<urn:uuid:e11eee46-e68f-4e1b-b4a3-0b9eeb74a877>',
                         'warc-target-uri': 'https://kv.wikipedia.org/wiki/%D0%9C%D0%B8%D0%BA%D1%83%D1%88%D0%B5%D0%B2_%D0%90%D0%BD%D0%B0%D1%82%D0%BE%D0%BB%D0%B8%D0%B9_%D0%9A%D0%BE%D0%BD%D1%81%D1%82%D0%B0%D0%BD%D1%82%D0%B8%D0%BD%D0%BE%D0%B2%D0%B8%D1%87',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': '1947, моз тӧлысь–1950, кӧч тӧлысь – уджалiс велöдысьöн да директорöн Сыктывдiн районса Ыб шöр школаын.'}
```

#### deduplicated_kw

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 3496,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:S5H4MWHD4QTG74ZNJZ5X63W2XSLUJU7C',
                         'warc-date': '2021-02-26T18:49:31Z',
                         'warc-identified-content-language': 'cym',
                         'warc-record-id': '<urn:uuid:44d32e62-4240-413a-9f8a-562fe27223c6>',
                         'warc-refers-to': '<urn:uuid:7d95741c-6974-427f-80f7-d08559f799aa>',
                         'warc-target-uri': 'https://kw.m.wikipedia.org/wiki/Kembra',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Kembra yw konna-tir menydhek yn Howlsedhes Breten Veur. Glow hag owr o poesek yn erbysieth Pow Kembra seulajydh, mes ...'}
```

#### deduplicated_ky

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 28946,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:TVCYX44AC2J2TBVAYMQW62P4XYHWPSAH',
                         'warc-date': '2021-02-24T20:28:28Z',
                         'warc-identified-content-language': 'kir,eng',
                         'warc-record-id': '<urn:uuid:b0b897b8-5d55-4109-967f-9e368be6b7aa>',
                         'warc-refers-to': '<urn:uuid:b7ac5729-15cb-44c8-a0a2-096cb46cb1de>',
                         'warc-target-uri': 'http://mezgilnews.kg/tag/klip/',
                         'warc-type': 'conversion'},
            'nb_sentences': 6,
            'offset': 0},
  'text': 'Мезгил. Ырчы Зерени соцтармактар аркылуу коркуткан белгисиз адамдарды милиция издеп баштады. Чүй облустук ИИБинин маа...'}
```

#### deduplicated_la

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2647,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:QXPYMWAXXOOHWKBNAYCNUODKWSB56XU4',
                         'warc-date': '2021-03-09T04:51:12Z',
                         'warc-identified-content-language': 'lat,eng',
                         'warc-record-id': '<urn:uuid:684bcdce-19ec-4a44-b814-949eb5ceff66>',
                         'warc-refers-to': '<urn:uuid:2cd40ddd-0087-41ba-8442-8b2b6b1bbcd2>',
                         'warc-target-uri': 'http://grhpay.es/index.php/about-us/',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Nam libero tempore, cum soluta nobis est eligendi optio cumque nihil impedit quo minus id quod maxime placeat facere ...'}
```

#### deduplicated_lb

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2060,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:5YXISU3T3UP7WKUDJ2W45OAKEFJ7ZD2T',
                         'warc-date': '2021-03-09T04:51:26Z',
                         'warc-identified-content-language': 'ltz',
                         'warc-record-id': '<urn:uuid:534e6ce8-782c-4813-9dfb-902736ffc141>',
                         'warc-refers-to': '<urn:uuid:5829843c-0428-4098-9213-52bb2fb319b2>',
                         'warc-target-uri': 'https://online-archive-extractor.com/lb/open-7z-file',
                         'warc-type': 'conversion'},
            'nb_sentences': 4,
            'offset': 0},
  'text': 'Eis Online Archiv Extraiteren erlaabt Iech den Inhalt vu kompriméierten Archiven direkt aus Ärem Browser ze extrahier...'}
```

#### deduplicated_lez

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 6238,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:4MMTYN2QRKUOUZESCUL3AOZJTMDM5YSY',
                         'warc-date': '2021-03-02T18:06:44Z',
                         'warc-identified-content-language': 'nno,eng',
                         'warc-record-id': '<urn:uuid:78581b3a-c21f-46a2-b168-bff6f147c337>',
                         'warc-refers-to': '<urn:uuid:02f1447d-0b61-4ad5-ac56-0f42c2438e6b>',
                         'warc-target-uri': 'https://lez.wikipedia.org/wiki/1877_%D0%B9%D0%B8%D1%81',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': '1877 йис (са агъзурни муьжуьдвишни пудкъанницIеирид лагьай йис) — григорийдин чIаваргандал гьалтайла ислендиз эгечӀза...'}
```
#### deduplicated_li

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2199,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:IIZSY6KLHN5WSCCGU4NZ6K6WYLIMJP4I',
                         'warc-date': '2021-03-04T07:19:27Z',
                         'warc-identified-content-language': 'nld',
                         'warc-record-id': '<urn:uuid:c7eb18bb-ea03-43c2-a1e9-e8eb5b15e25b>',
                         'warc-refers-to': '<urn:uuid:486a5d06-6dd8-46d2-a93f-d798b8a5bd07>',
                         'warc-target-uri': 'https://li.m.wikipedia.org/wiki/Waterop',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': "Hoes Karsveld aan de Gulp sjtamp oet de 18e ièw. 't Kesjtièlechtig hoes ies van mergel mèt 'ne trapgevel. 't Ies gebo..."}
```

#### deduplicated_lmo

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 6553,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:DAJPSPBN7BVZNRWANXQAW2KP6LQEWNUW',
                         'warc-date': '2021-03-04T10:49:45Z',
                         'warc-identified-content-language': None,
                         'warc-record-id': '<urn:uuid:d9452b27-9a95-47e9-8274-518138812f56>',
                         'warc-refers-to': '<urn:uuid:4ff4e796-c685-4c81-adc9-fecbd50e79cb>',
                         'warc-target-uri': 'https://lmo.wikipedia.org/wiki/Antrenas',
                         'warc-type': 'conversion'},
            'nb_sentences': 2,
            'offset': 0},
  'text': "El sò teretóre el g'ha 'na superfìce de 17,55 km² e 'l và de 'na altèsa mìnima de 720 méter a 'na altèsa màsima de 11..."}
```

#### deduplicated_lo

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 15036,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:2FGV42SN72HRKRBEEQ7QJVJBLUYQPCIH',
                         'warc-date': '2021-03-09T04:48:08Z',
                         'warc-identified-content-language': 'eng,khm,lao',
                         'warc-record-id': '<urn:uuid:04d772d6-09db-4d5a-86c8-22b914a35b6f>',
                         'warc-refers-to': '<urn:uuid:f3cdcafa-5a28-4fbb-81df-7cc5e7bb3248>',
                         'warc-target-uri': 'http://www.ahealthyme.com/RelatedItems/RelatedDocuments.pg?d=&TypeId=121&ContentId=761&Category=DC',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'ຂໍ້ຄວນໃສ່ໃຈ: ຖ້າເຈົ້າເວົ້າພາສາລາວໄດ້, ມີການບໍລິການຊ່ວຍເຫຼືອດ້ານພາສາໃຫ້ທ່ານໂດຍບໍ່ເສຍຄ່າ. ໂທ ຫາ ຝ່າຍບໍລິການສະ ມາ ຊິກທີ່...'}
```
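Note that the `deduplicated_gu`, `deduplicated_km` and `deduplicated_lo` examples above are the same multilingual page (identical `warc-record-id`): deduplication is applied within each language, not across configurations. A sketch for spotting such cross-config overlap, under the same repository assumption as the loading example above:

```
from datasets import load_dataset

# Sketch: the gu, km and lo examples above all carry
# warc-record-id <urn:uuid:04d772d6-09db-4d5a-86c8-22b914a35b6f>.
# Repo id and config list are assumptions; adjust to this card's repo.
seen = {}
for config in ("deduplicated_gu", "deduplicated_km", "deduplicated_lo"):
    ds = load_dataset("oscar-corpus/OSCAR-2109", config,
                      split="train", streaming=True)
    for record in ds.take(1000):          # sample; raise for a fuller scan
        rid = record["meta"]["headers"]["warc-record-id"]
        if rid in seen and seen[rid] != config:
            print(f"{rid} appears in both {seen[rid]} and {config}")
        seen.setdefault(rid, config)
```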
#### deduplicated_lrc

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 7958,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:GTR6WCXERTVUI5RIKHE7MC7LTACF7R2W',
                         'warc-date': '2021-03-01T04:48:39Z',
                         'warc-identified-content-language': 'fas,eng',
                         'warc-record-id': '<urn:uuid:7ba618e0-f09e-48c2-a0be-a1b77ba5678a>',
                         'warc-refers-to': '<urn:uuid:2e4504e7-46c9-4aaa-818f-3077c73f1d97>',
                         'warc-target-uri': 'http://www.shaya.me/2013/01/blog-post_3.html',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار یار'}
```

#### deduplicated_lt

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 221005,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:KSLULK6RGSIW43IBMSAEU4643LSRMW3V',
                         'warc-date': '2021-03-05T07:21:10Z',
                         'warc-identified-content-language': 'lit',
                         'warc-record-id': '<urn:uuid:fa6592a5-bc87-4683-88d6-37ce74af5058>',
                         'warc-refers-to': '<urn:uuid:d78122b4-90d8-4cdf-a205-579bcff9ec88>',
                         'warc-target-uri': 'https://apcis.ktu.edu/lt/site/katalogas?cat_id=132&type=2',
                         'warc-type': 'conversion'},
            'nb_sentences': 219,
            'offset': 0},
  'text': 'Telšių apskritis – viena iš Lietuvos sričių, kuri turi ką parodyti pasauliui, ir iš to galima pasiekti didelės naudos...'}
```

#### deduplicated_lv

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 4036,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:NUB75CFJHUBI7HOED4HVCNHGQUIVCBO3',
                         'warc-date': '2021-03-09T03:46:31Z',
                         'warc-identified-content-language': 'lav,eng',
                         'warc-record-id': '<urn:uuid:9ad87feb-993f-45b9-bf1e-53a8185b3dc6>',
                         'warc-refers-to': '<urn:uuid:64eb85d8-c204-4cf8-a6c3-29760fe1f362>',
                         'warc-target-uri': 'http://igatesbaznica.lv/augupvrsta-stratijas-binr-opcijas.php',
                         'warc-type': 'conversion'},
            'nb_sentences': 10,
            'offset': 0},
  'text': 'Latvijā šobrīd nav normatīvu aktu mājas un istabas dzīvnieku vairotāju regulēšanai, jo vairākums audzētāju savu nodar...'}
```

#### deduplicated_mai

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 3632,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:OQRKDLTDWJCD37HVHGXYU7E3BXBR5NB3',
                         'warc-date': '2021-03-01T16:25:27Z',
                         'warc-identified-content-language': 'bih,hin,fra',
                         'warc-record-id': '<urn:uuid:da0cf739-4c6c-46d4-9c32-8e34a673fa26>',
                         'warc-refers-to': '<urn:uuid:0c39ca75-b871-431b-8c89-63d58ea0893f>',
                         'warc-target-uri': 'https://mai.m.wikipedia.org/wiki/%E0%A4%B0%E0%A4%BE%E0%A4%9C%E0%A4%A7%E0%A4%BE%E0%A4%A8%E0%A5%80',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'शब्द राजधानी संस्कृत सँ आएल अछि । राजधानी आम तौर पर सङ्घटक क्षेत्रक सब सँ पैग सहर होएत अछि मुदा ई जरुरी नै अछि ।[१]'}
```

#### deduplicated_mg

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2714,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:OGAHJNKN3OSLXYKJKK2LQAFKAEM67DFQ',
                         'warc-date': '2021-03-03T15:32:59Z',
                         'warc-identified-content-language': 'mlg,nno',
                         'warc-record-id': '<urn:uuid:f5a6492f-29c4-4de9-baaa-12edb86d89cd>',
                         'warc-refers-to': '<urn:uuid:970362fe-4102-481e-8f4b-db5f3e8ce4db>',
                         'warc-target-uri': 'https://mg.wikipedia.org/wiki/Barro_Alto_(Bahia)',
                         'warc-type': 'conversion'},
            'nb_sentences': 2,
            'offset': 0},
  'text': "I Barro Alto (Bahia) dia kaominina ao Brazila, ao amin'i Bahia, ao amin'i Centro-Norte Baiano, Irecê.\nNy velarantanin..."}
```

#### deduplicated_mhr

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 27685,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:YJYVG5XEYRKALEYIO5PCK34QFNUO3JRD',
                         'warc-date': '2021-03-06T17:12:45Z',
                         'warc-identified-content-language': 'rus',
                         'warc-record-id': '<urn:uuid:3405f528-672f-449c-a2a3-cfa73f5d17b0>',
                         'warc-refers-to': '<urn:uuid:dfe46be9-656c-4b02-9384-fd1e75987a15>',
                         'warc-target-uri': 'http://marisong.ru/mar/kalendar',
                         'warc-type': 'conversion'},
            'nb_sentences': 31,
            'offset': 0},
  'text': '1982 — 1985 ийлаште — Палантай лӱмеш музыкальный училищыште баян дене отделенийыште шинчымашым налын.\nТыгак шуко жап ...'}
```

#### deduplicated_min

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 4309,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:XV23LOBECSVNRXJ2NJTCZVJXOCVQ3BBR',
                         'warc-date': '2021-03-08T22:10:36Z',
                         'warc-identified-content-language': 'eng,spa',
                         'warc-record-id': '<urn:uuid:fdaddf50-1986-44b3-b84b-d9a5d0fa27f1>',
                         'warc-refers-to': '<urn:uuid:257f7969-3a19-42d6-ae1a-ddb5c0486bb8>',
                         'warc-target-uri': 'https://cookingwithmydoctor.com/?LOSS=danger-of-keto-diet%2F',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': '\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
          '\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
          '\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
          '\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
          '\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
          '\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
          '\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
          '\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
          '\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
          '\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
          '\u200e\u200f\u200f\u200e \u200e\u200f\u200f\u200e '
          '\u200e\u200f\u200f...'}
```

#### deduplicated_mk

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 22483,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:SGEJ6O6XOEVCQXKXT2XRSRBOSH3ZDSVJ',
                         'warc-date': '2021-03-02T05:16:16Z',
                         'warc-identified-content-language': 'mkd,srp,eng',
                         'warc-record-id': '<urn:uuid:168d1661-a73f-4687-a614-e8cecf7a70a0>',
                         'warc-refers-to': '<urn:uuid:a61ec44e-a4c1-4b8e-837c-7adc80e853e2>',
                         'warc-target-uri': 'http://zenica.mk/2018/02/10/tri-dena-kultura-vo-karev-festival/',
                         'warc-type': 'conversion'},
            'nb_sentences': 4,
            'offset': 0},
  'text': '„Три дена културa“ е настан кој ќе се одржи од 21-23 февруари (среда, четврток и петок, 20:00ч.) во гимназијата „Нико...'}
```

#### deduplicated_ml

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 20202,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:ZOEIO7AIEAGDR2S6TOZYZOAQDOV6QJUE',
                         'warc-date': '2021-03-08T00:10:05Z',
                         'warc-identified-content-language': 'mal,eng',
                         'warc-record-id': '<urn:uuid:f19a2925-0064-47e2-9ec9-48b2786657bd>',
                         'warc-refers-to': '<urn:uuid:20c7b8fd-1909-480f-b36c-89cd1d0ee3c4>',
                         'warc-target-uri': 'https://boolokam.com/what-to-do-for-police-clearance-conduct-certificate-in-uae/227247',
                         'warc-type': 'conversion'},
            'nb_sentences': 12,
            'offset': 0},
  'text': 'രണ്ടുപേര്\u200d തമ്മിലുള്ള സ്നേഹ ബന്ധം അവര്\u200dക്കിടയില്\u200d പൊതുവായി കാണപ്പെടുന്ന മൂല്യങ്ങളുടെ അടിസ്ഥാനത്തില്\u200d ആയിരിക്കും.\nഒരുവ...'}
```

#### deduplicated_mn

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 5616,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:ILMC56UA63RNTABOJTVMUJQJHMKKC6QR',
                         'warc-date': '2021-03-09T04:20:37Z',
                         'warc-identified-content-language': 'mon,ell',
                         'warc-record-id': '<urn:uuid:07697b69-9e58-4e84-bc0e-a536bcc1ae11>',
                         'warc-refers-to': '<urn:uuid:704af2f1-3094-45dc-a1c5-63bd08d53069>',
                         'warc-target-uri': 'http://mn.uncyclopedia.info/index.php?title=%D0%A5%D1%8D%D1%80%D1%8D%D0%B3%D0%BB%D1%8D%D0%B3%D1%87:Mongol_Emperor&action=edit',
                         'warc-type': 'conversion'},
            'nb_sentences': 3,
            'offset': 0},
  'text': 'Анциклопедиа-д оруулсан бүх хувь нэмэр Creative Commons Attribution-NonCommercial-ShareAlike-н хувьд (дэлгэрэнгүй мэд...'}
```

#### deduplicated_mr

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 11373,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:V3PQES342QGJGRFZ6QMXNB6RIX2ST3V5',
                         'warc-date': '2021-03-09T05:01:31Z',
                         'warc-identified-content-language': 'mar,eng',
                         'warc-record-id': '<urn:uuid:b96cf6ee-7cda-4a7a-9364-08b51284a05e>',
                         'warc-refers-to': '<urn:uuid:92e533ed-c2c7-4ac7-9b17-af780a503ce6>',
                         'warc-target-uri': 'https://marathi.thewire.in/devangana-kalita-uapa-bail-rejected-natasha-narwal',
                         'warc-type': 'conversion'},
            'nb_sentences': 9,
            'offset': 0},
  'text': 'पुण्यातील कार्यक्रमांना स्थगिती:पुण्यातील अनेक सांस्कृतिक नियोजित कार्यक्रमांना स्थगिती, कोरोनाच्या वाढत्या रुग्णांमु...'}
```

#### deduplicated_mrj

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 3492,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:7B242FKI45QVEGJQTF46YCRFYMYW6YFG',
                         'warc-date': '2021-03-03T05:03:02Z',
                         'warc-identified-content-language': 'eng',
                         'warc-record-id': '<urn:uuid:bd7d5682-be60-4a00-9781-29b03a87b30e>',
                         'warc-refers-to': '<urn:uuid:49641a15-2834-4a72-a011-fdc9cd7273c7>',
                         'warc-target-uri': 'https://mrj.wikipedia.org/wiki/%D0%91%D0%B0%D1%80%D0%BA%D0%B5%D1%80%D0%B8',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Баркери (латинлӓ Barkeria) – Орхидейвлӓ (Orchidaceae) йыхыш пырышы пеледшӹ кушкыш. Америкышты вӓшлиӓлтеш. Цилӓжӹ 15 й...'}
```

#### deduplicated_ms

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 7939,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:7BWXR4LQ6O2IBJLKLKWJKHTF3JBXB26T',
                         'warc-date': '2021-03-09T05:38:44Z',
                         'warc-identified-content-language': 'msa,eng',
                         'warc-record-id': '<urn:uuid:35a9d91c-3a64-4748-b135-3c467bfa403f>',
                         'warc-refers-to': '<urn:uuid:9cf4de91-0523-4327-9fcb-5c8f99956da0>',
                         'warc-target-uri': 'https://kheru2006.livejournal.com/1665383.html',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Bagaimanapun beliau memiliki satu lagi pandangan iaitu perkara paling bodoh seseorang boleh lakukan ialah menjangka d...'}
```

#### deduplicated_mt

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 98714,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:HC75UY5ZHRC3AY4C2VHFR4JADUM2AZBH',
                         'warc-date': '2021-03-09T04:29:23Z',
                         'warc-identified-content-language': 'eng,mlt',
                         'warc-record-id': '<urn:uuid:45dec17d-a638-454e-a136-c45579517b53>',
                         'warc-refers-to': '<urn:uuid:c82d8d7c-05b6-43d8-be17-5072323aab01>',
                         'warc-target-uri': 'https://carmelcacopardo.wordpress.com/2015/07/28/',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Kemmuna hi protetta bħala sit Natura 2000. Imma ma nistgħux neskludu logħob tas-soltu biex iduru ma din il-protezzjon...'}
```

#### deduplicated_mwl

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 11598,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:2A22BTIRZ4E5FI2FCG7AUCWJQTY2J4ST',
                         'warc-date': '2021-02-26T13:58:26Z',
                         'warc-identified-content-language': None,
                         'warc-record-id': '<urn:uuid:73a60756-1664-410f-bf62-ab44c88c074f>',
                         'warc-refers-to': '<urn:uuid:800d3642-449d-4be0-817c-edc7fb64c1b4>',
                         'warc-target-uri': 'https://mwl.wikipedia.org/wiki/R%C3%A1dio_(quemunica%C3%A7on)',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'La radioquemunicaçon ye un meio de quemunicaçon por trascepçon de anformaçon, podendo ser rializada por Radiaçon eile...'}
```

#### deduplicated_my

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 237288,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:U2QEC6RSZR5UW5LXTNN6QRD47FHVYVJY',
                         'warc-date': '2021-02-27T06:07:58Z',
                         'warc-identified-content-language': 'mya,eng',
                         'warc-record-id': '<urn:uuid:817de4f8-0b7a-446e-bae2-8436019dd34f>',
                         'warc-refers-to': '<urn:uuid:b364cc33-c1bf-4adb-8317-1aad1cfd4aa0>',
                         'warc-target-uri': 'http://www.pnsjapan.org/2010/05/',
                         'warc-type': 'conversion'},
            'nb_sentences': 248,
            'offset': 0},
  'text': 'စတိုင္လည္းက် စမတ္လည္းက်တဲ့ ေန႔စဥ္ လႈပ္ရွားမႈဘဝေလးေတြကို ပိုင္ဆိုင္ႏိုင္ဖို႔အတြက္ Samsung ကေန မၾကာေသးခင္က ထုတ္လုပ္လိုက...'}
```
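`meta.nb_sentences` varies widely across these examples, from single-sentence stubs up to long documents such as the 248-sentence `deduplicated_my` record above. A quick sketch, under the same repository assumption as before, for profiling that distribution on a sample:

```
from collections import Counter
from datasets import load_dataset

# Sketch: histogram of document lengths (in sentences) for one config.
# The repo id is an assumption; substitute this card's actual repository.
ds = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_my",
                  split="train", streaming=True)
hist = Counter(record["meta"]["nb_sentences"] for record in ds.take(5000))
for nb, freq in sorted(hist.items())[:10]:
    print(f"{nb:>4} sentences: {freq} documents")
```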
#### deduplicated_myv

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 11091,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:IFCGUVXSCYHEFYLUVOQ5QMGJWYL2CTVJ',
                         'warc-date': '2021-03-02T21:05:00Z',
                         'warc-identified-content-language': 'rus',
                         'warc-record-id': '<urn:uuid:ea77b8a6-e394-48c1-b865-3cea87e7b906>',
                         'warc-refers-to': '<urn:uuid:a4927904-4e3c-4f22-858a-adad9bbb1e63>',
                         'warc-target-uri': 'https://ru.m.wikinews.org/wiki/%D0%9E%D0%BC%D0%B1%D0%BE%D0%BC%D0%B0%D1%81%D1%82%D0%BE%D1%80%D1%81%D0%BE_%C2%AB%D0%90%D0%B7%D0%BE%D1%80%C2%BB_%D1%8D%D1%80%D0%B7%D1%8F%D0%BD%D1%8C_%D1%8D%D1%80%D1%8F%D0%BC%D0%B0%D1%80%D1%82%D0%BE%D0%BD%D1%82%D1%8C_%D0%B2%D0%B0%D1%81%D0%B5%D0%BD%D1%86%D0%B5_%D0%BD%D0%B5%D0%B2%D1%82%D0%B5%D0%BC%D0%B0%D1%81%D1%8C_%D1%8E%D1%82%D1%8B_%D0%A1%D1%83%D0%BE%D0%BC%D0%B8%D1%81%D1%81%D1%8D',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': '«Азор» — васенце эрзянь кельсэ артонь эриванмо-фильманть теемстэ. Орданьбуень Баеньбуе веле, Мордовиясо.'}
```

#### deduplicated_mzn

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 6193,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:QVLHP3APVA34EQ4YFDRJWF2ODTQZ3QG6',
                         'warc-date': '2021-03-08T00:11:58Z',
                         'warc-identified-content-language': 'fas',
                         'warc-record-id': '<urn:uuid:c86dfe2b-795d-4e5d-aaa0-75c1e98690a6>',
                         'warc-refers-to': '<urn:uuid:b6258701-626d-4a7c-b79e-1c526f9892a6>',
                         'warc-target-uri': 'https://mzn.wikipedia.org/wiki/%D8%A7%D9%88%D8%B3%D9%88%DA%A9%DB%8C%D8%8C_%D8%A7%D9%88%D8%A6%DB%8C%D8%AA%D8%A7',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'اوسوکی اتا شهر نوم هسته که جاپون ِاوئیتا استان دله دره. ونه جمعیت ره سال ۲۰۰۸ گادِر ۴۲٬۴۶۴ نفر اعلام هاکاردنه. این شه...'}
```

#### deduplicated_nah

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2517,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:DSXC3C7F2LUL47USAV5ZRT4HMVQ4XGUI',
                         'warc-date': '2021-03-03T14:32:16Z',
                         'warc-identified-content-language': 'spa,ell',
                         'warc-record-id': '<urn:uuid:a305013e-01ba-49a3-89b9-027dc622576f>',
                         'warc-refers-to': '<urn:uuid:073b9e5a-a0d3-41c3-89bd-8f972b6a4154>',
                         'warc-target-uri': 'https://nah.wikipedia.org/wiki/%CF%98',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Ϙ ītōcā inic cē huēhuehtlahtōl īpan greciamachiyōtlahtōltecpantiliztli. Ītlahtōl nō ic 90 tlapōhualli.'}
```

#### deduplicated_nap

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2331,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:EXGUINJCGD2K4E2IVQNJJAQLS4UDJ2TG',
                         'warc-date': '2021-03-07T13:12:47Z',
                         'warc-identified-content-language': 'cos,srp,lav',
                         'warc-record-id': '<urn:uuid:7362689d-31bc-492d-8e60-851c963b5313>',
                         'warc-refers-to': '<urn:uuid:ecd1bb5f-d247-4739-b9e9-4f93d46081d6>',
                         'warc-target-uri': 'https://nap.wikipedia.org/wiki/Priatorio',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': "'Int'ô cattolicesimo, priatorio è 'o pruciesso 'e purefecazzione 'e ll'aneme ca moreno 'into ll'amicizzia 'e Dio ma n..."}
```

#### deduplicated_nds

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 5066,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:G2O2EJZLTIU5IDSXMYHPP3TMXVXMAZ3P',
                         'warc-date': '2021-03-08T22:13:48Z',
                         'warc-identified-content-language': 'nno,srp',
                         'warc-record-id': '<urn:uuid:d7f0c9a0-9c12-4d9a-ae5a-184bf7b59c5d>',
                         'warc-refers-to': '<urn:uuid:31f4d793-f3a4-4403-9c1f-a52f878b63c8>',
                         'warc-target-uri': 'https://nds.wikipedia.org/wiki/1763',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': '7. Oktober: In London geiht en königliche Proklamatschoon rut, dat vun nu af an in de Kolonien vun Amerika de Kamm vu...'}
```

#### deduplicated_ne

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 17723,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:AZ2CUDZ672TVV2R3O643TJAX7JGXASP2',
                         'warc-date': '2021-03-08T22:24:08Z',
                         'warc-identified-content-language': 'nep',
                         'warc-record-id': '<urn:uuid:fa642413-904a-4def-86fc-a4889e5e9e71>',
                         'warc-refers-to': '<urn:uuid:f7caed4f-c5ae-4f55-944a-1f06ed71e438>',
                         'warc-target-uri': 'https://postpati.com/2017/26/07/1353',
                         'warc-type': 'conversion'},
            'nb_sentences': 9,
            'offset': 0},
  'text': 'युएइको दूतावास बिरुद्द युएइमा रहेका संघ संस्थाहरु द्वारा निरन्तर दवाव आउने क्रमजारि रहेको छ। नेकपा माओबादी सम्बद्ध रह...'}
```

#### deduplicated_new

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2388,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:E6YZSKQK57PDBRG7VPE64CGOL3N4D63I',
                         'warc-date': '2021-03-09T04:24:48Z',
                         'warc-identified-content-language': 'nep,eng,bih',
                         'warc-record-id': '<urn:uuid:20692995-9d67-4b05-ba9b-9dbac80b4441>',
                         'warc-refers-to': '<urn:uuid:a8445a70-117a-42c1-89ca-aa5df0cc5616>',
                         'warc-target-uri': 'https://new.wikipedia.org/wiki/%E0%A4%A7%E0%A4%BE%E0%A4%AA%E0%A4%BE',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'धापा (अंग्रेजी भाय:Dhapa), नेपायागु कर्णाली अञ्चलयागु जुम्ला जिल्लायागु गाँ विकास समिति खः। थ्व थासे231खा छेँ दु।'}
```

#### deduplicated_nl

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 766978,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:77YAXN3F4IGI2CYBM3IESJRTCIB4WY2F',
                         'warc-date': '2021-02-25T16:49:18Z',
                         'warc-identified-content-language': 'nld',
                         'warc-record-id': '<urn:uuid:0b08e51a-1b82-4fb9-a420-8556f2fb47a3>',
                         'warc-refers-to': '<urn:uuid:dae7ca23-9b7e-45d1-9a1c-604942af8cb9>',
                         'warc-target-uri': 'https://www.delpher.nl/nl/tijdschriften/view?identifier=MMUBA13:001691001:00689&coll=dts',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': '1 Deze Duitse hond is nauw verwant aan de Duitse Brak, de Westfaalse Dasbrak werd gefokt om op dieren te jagen, zoals...'}
```

#### deduplicated_nn

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2770,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:FLRYPK225URFXO3IG4LP6D5TI2WW7MNU',
                         'warc-date': '2021-03-09T03:50:05Z',
                         'warc-identified-content-language': 'nno',
                         'warc-record-id': '<urn:uuid:de821d19-abed-4a35-9284-91176a5428b9>',
                         'warc-refers-to': '<urn:uuid:7ed9913e-e7dd-496f-b0ef-e82098dd53ca>',
                         'warc-target-uri': 'https://www.avisa-hordaland.no/trafikk/tunell-pa-e16-stengd-2/',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Bilføraren som vart stogga på E16 i helga hadde 2,28 i promille: – Han var ikkje i stand til å ta vare på seg sjølv'}
```

#### deduplicated_no

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 1329,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:G7JC2T5AD4YK4WWFGTYHHGP5VHB6M7KU',
                         'warc-date': '2021-03-08T13:17:52Z',
                         'warc-identified-content-language': 'nor',
                         'warc-record-id': '<urn:uuid:9e215de3-f988-4754-9ef5-6370121b9b5e>',
                         'warc-refers-to': '<urn:uuid:1facfcb5-da68-4122-9257-102271944050>',
                         'warc-target-uri': 'https://www.miljoindex.no/781825/nexans-norway-hovedkontor/',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Utvikling, produksjon og markedsføring av kabler og kablingssystemer, samt annen tilknyttet virksomhet, herunder del...'}
```

#### deduplicated_oc

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 20117,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:2XDHRCL2CSS7YFAM2IAGQL6CSJJEQDXI',
                         'warc-date': '2021-03-03T15:40:21Z',
                         'warc-identified-content-language': 'oci',
                         'warc-record-id': '<urn:uuid:c9ebdec5-af68-4756-88c8-1df831621c5b>',
                         'warc-refers-to': '<urn:uuid:199db451-0e6f-4f75-ad81-2e7612295452>',
                         'warc-target-uri': 'https://oc.wikipedia.org/wiki/2',
                         'warc-type': 'conversion'},
            'nb_sentences': 18,
            'offset': 0},
  'text': "8 : dins l'Empèri Part, assassinat dau rèi Orodes III, probablament en causa de son autoritarisme, que foguèt remplaç..."}
```

#### deduplicated_or

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 12859,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:KQDIT6NHKBV43F56DTHTM5ZS3GHJT5SY',
                         'warc-date': '2021-03-09T05:25:21Z',
                         'warc-identified-content-language': 'ori,eng',
                         'warc-record-id': '<urn:uuid:e25e33da-92c5-42d6-aef8-c3465855312a>',
                         'warc-refers-to': '<urn:uuid:7457ac60-4aae-44ad-aaec-314795ea0708>',
                         'warc-target-uri': 'https://or.wikipedia.org/wiki/%E0%AC%A6%E0%AD%8D%E0%AD%B1%E0%AC%BF%E0%AC%A4%E0%AD%80%E0%AD%9F_%E0%AC%AC%E0%AC%BF%E0%AC%B6%E0%AD%8D%E0%AD%B1%E0%AC%AF%E0%AD%81%E0%AC%A6%E0%AD%8D%E0%AC%A7',
                         'warc-type': 'conversion'},
            'nb_sentences': 3,
            'offset': 0},
  'text': 'ଇଉରୋପ, ପ୍ରଶାନ୍ତ ମହାସାଗର, ଆଟଲାଣ୍ଟିକ ମହାସାଗର, ଦକ୍ଷିଣ-ପୂର୍ବ ଏସିଆ, ଚୀନ, ମଧ୍ୟପ୍ରାଚ୍ୟ, ଭୂମଧ୍ୟସାଗର, ଉତ୍ତର ଆଫ୍ରିକା, ପୂର୍ବ ଆଫ୍...'}
```

#### deduplicated_os

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 7079,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:N7CKDF6E3SJBINW4SR6LIUNKLIJP2ROL',
                         'warc-date': '2021-03-08T22:01:32Z',
                         'warc-identified-content-language': 'nno',
                         'warc-record-id': '<urn:uuid:4cd86a68-815b-4539-84a8-bab850034e60>',
                         'warc-refers-to': '<urn:uuid:8774fb5e-b7fb-4feb-85e7-8c7b33f5980b>',
                         'warc-target-uri': 'https://os.wikipedia.org/wiki/%D0%9F%D1%83%D1%88%D0%BA%D0%B8%D0%BD,_%D0%A1%D0%B5%D1%80%D0%B3%D0%B5%D0%B9%D1%8B_%D1%84%D1%8B%D1%80%D1%82_%D0%90%D0%BB%D0%B5%D0%BA%D1%81%D0%B0%D0%BD%D0%B4%D1%80',
                         'warc-type': 'conversion'},
            'nb_sentences': 4,
            'offset': 0},
  'text': 'Пушкин Александр Сергейы фырт (уырыс. Александр Сергеевич Пушкин; райгуырдис 1799 азы 6 июны Мæскуыйы — амардис 1837 ...'}
```

#### deduplicated_pa

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 3990,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:HBYN5XY3CD2KI4XIWMBJYPSV2ZPNBWUN',
                         'warc-date': '2021-03-09T05:05:20Z',
                         'warc-identified-content-language': 'pan,eng',
                         'warc-record-id': '<urn:uuid:1ac5c8d1-e750-492e-b35e-b9780bfd16fd>',
                         'warc-refers-to': '<urn:uuid:b4d8f997-8c9a-43cf-b16c-e8a77c209062>',
                         'warc-target-uri': 'https://pa.nhp.gov.in/Detail/getdirection?url=radha-krishna-nurshing-andmat-home-rae_bareli-uttar_pradesh',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'ਇਹ ਪੋਰਟਲ ਰਾਸ਼ਟਰੀ ਸਿਹਤ ਪੋਰਟਲ ਦੇ ਸਿਹਤ ਸੂਚਨਾ ਕੇਂਦਰ (CHI) ਦੁਆਰਾ ਵਿਕਸਿਤ ਤੇ ਤਿਆਰ ਕੀਤਾ ਗਿਆ ਹੈ ਅਤੇ ਸਿਹਤ ਤੇ ਪਰਿਵਾਰ ਭਲਾਈ ਮੰਤਰਾਲੇ...'}
```

#### deduplicated_pam

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 4615,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:WOAFTI75LXN3LAF6WFDRDHITPU33CZRK',
                         'warc-date': '2021-03-07T22:02:39Z',
                         'warc-identified-content-language': 'eng',
                         'warc-record-id': '<urn:uuid:9d7a202a-0fec-4aac-9921-2ebf5aa7f9a2>',
                         'warc-refers-to': '<urn:uuid:70b6a707-77b1-4a0f-84e6-d75ed8d729ad>',
                         'warc-target-uri': 'https://toddlers.me/kpai-sarankan-gading-beri-penguatan-psikologi-untuk-gempi/',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': '“Káláu Gádìng tìdák mámpu melákukán ìtu, yá bìsá mìntá tolong kepádá oráng yáng berkompeten, mìsálnyá psìkolog átáu s...'}
```

#### deduplicated_pl

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 51849,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:25YENUTK4YA3ZYGCWQH5Z6YDINCMI6SI',
                         'warc-date': '2021-03-05T22:43:01Z',
                         'warc-identified-content-language': 'pol',
                         'warc-record-id': '<urn:uuid:753116b6-f680-448d-ae8a-8fc88ce061b1>',
                         'warc-refers-to': '<urn:uuid:926693c4-5b59-4f50-98b9-787576fc71d7>',
                         'warc-target-uri': 'https://igraszki-jezykowe.pl/category/tips-and-tricks-metodyka/',
                         'warc-type': 'conversion'},
            'nb_sentences': 60,
            'offset': 0},
  'text': 'W niedzielę, 12 czerwca w Orlando na Florydzie islamski terrorysta, powiązany z ISIS zastrzelił 50 osób i drugie tyle...'}
```

#### deduplicated_pms

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2620,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:2T5H5XDLC3KPDB33XXVCTGNNYYDJXQWQ',
                         'warc-date': '2021-03-03T16:04:55Z',
                         'warc-identified-content-language': 'srp',
                         'warc-record-id': '<urn:uuid:952c2dda-041e-40ff-bf28-8a39075f53d9>',
                         'warc-refers-to': '<urn:uuid:6d526022-b736-4a51-9b9c-c5bdd5a546f9>',
                         'warc-target-uri': 'https://pms.wikipedia.org/wiki/Auer',
                         'warc-type': 'conversion'},
            'nb_sentences': 2,
            'offset': 0},
  'text': "Auer (Ora për j'italian) a l'é un comun ëd 3.025 abitant dla provincia ëd Bolsan (Region Autònoma Trentin-Sud Tiròl)...."}
```

#### deduplicated_pnb

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2896,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:GWWDSJAQDB7JDQWV65CI6WT7E6C33DL4',
                         'warc-date': '2021-03-08T23:01:08Z',
                         'warc-identified-content-language': 'urd',
                         'warc-record-id': '<urn:uuid:8c385ca8-7561-4f47-b5a3-0862488eb948>',
                         'warc-refers-to': '<urn:uuid:837d621d-3540-44fd-a4d0-6cb3c6f2327f>',
                         'warc-target-uri': 'https://pnb.wikipedia.org/wiki/453%DA%BE',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'لکھت کریئیٹیو کامنز انتساب/ اکوجہے-شراکت لائسنس دے ہیٹھ دستیاب اے، ہور شرطاں وی لاگو ہوسکدیاں نیں۔ ویروے لئی ورتن شرط...'}
```

#### deduplicated_ps

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2424,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:CAUU5Y7TOTASV7WYKCYRCVXTZ7GGN2VO',
                         'warc-date': '2021-03-09T05:08:35Z',
                         'warc-identified-content-language': 'pus',
                         'warc-record-id': '<urn:uuid:d784cf7a-91e1-4c54-96a2-e41c67318548>',
                         'warc-refers-to': '<urn:uuid:98aed7d2-c3e3-4039-af83-f2c73a5c19f5>',
                         'warc-target-uri': 'https://www.mashaalradio.com/a/29821043.html',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'د افغانستان په فاریاب ولایت کې په یوه پارک کې ښځو په برقعو کې ورزش کړی دی. د سیمې چارواکي وايي، د ښځو د ورزش لپاره ځا...'}
```

#### deduplicated_pt

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 79931,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:JYDP4XMEGW2XPPV6NAAF772KDH4X2CCF',
                         'warc-date': '2021-02-25T13:48:41Z',
                         'warc-identified-content-language': 'por',
                         'warc-record-id': '<urn:uuid:3b50f546-e03b-461f-98c8-5a38920d7c0a>',
                         'warc-refers-to': '<urn:uuid:564bfb21-0705-4997-bbb9-472f0cbcad3e>',
                         'warc-target-uri': 'http://www.artefazparte.com/',
                         'warc-type': 'conversion'},
            'nb_sentences': 117,
            'offset': 0},
  'text': 'A reflexão sobre identidade de género anda a cansar muitos de nós. Sobretudo os que não têm dúvidas e nela se sentem ...'}
```

#### deduplicated_qu

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 2630,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:34TX2UNXR2JLRLAFTE3ILOBMEBRMWIRH',
                         'warc-date': '2021-03-09T05:23:48Z',
                         'warc-identified-content-language': 'que',
                         'warc-record-id': '<urn:uuid:237398f6-a300-449b-9e09-7a1ed8cf1e97>',
                         'warc-refers-to': '<urn:uuid:84b20aab-d538-4efc-bc97-33d546d84802>',
                         'warc-target-uri': 'https://qu.wikipedia.org/wiki/Sapaq:HukchasqaTinkimuq/Chinchay_Chungcheong_pruwinsya',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': "Kay sapaq p'anqaqa t'inkisqa p'anqakunapi ñaqha hukchasqakunatam rikuchin. Watiqasqayki p'anqakunaqa yanasapa qillqas..."}
```

#### deduplicated_rm

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 100558,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:Z7R6QV2K5FDIHR4QJH7F2NTXND6NDEFY',
                         'warc-date': '2021-02-27T13:53:32Z',
                         'warc-identified-content-language': 'deu',
                         'warc-record-id': '<urn:uuid:da3aec28-6c61-470c-a5d2-66710bc1fb35>',
                         'warc-refers-to': '<urn:uuid:9d04f371-89a7-4ac2-9b1e-883aa93e4ace>',
                         'warc-target-uri': 'http://lexbrowser.provinz.bz.it/doc/la/lp-2009-5/lege_provinzialadi_28_de_set_mber_dl_2009_n_5.aspx?view=1',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': '(2) La prestaziun dla garanzia é sotmetüda al’aprovaziun di decunć finanziars da pert dl’aministraziun dl consorz.'}
```

#### deduplicated_ro

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:

```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 1677,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:DXKBGKXVETQLCHTHRMLLSWUPXTDNJDVV',
                         'warc-date': '2021-02-26T12:19:49Z',
                         'warc-identified-content-language': 'ron',
                         'warc-record-id': '<urn:uuid:2c20c06f-ca98-4118-9222-7b3b74bc760b>',
                         'warc-refers-to': '<urn:uuid:e77c028a-5857-4ec2-90db-58a9bb57c510>',
                         'warc-target-uri': 'https://ro.visafoto.com/es-visa-photo',
                         'warc-type': 'conversion'},
            'nb_sentences': 1,
            'offset': 0},
  'text': 'Căluşarii sau Boristenii, melodie culeasă din Braşov, în 1832, de Canzler cav.
de Ferio şi publicată târziu de Otto H...'} ``` #### deduplicated_ru * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 14025, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:2HSXIFOHEJZOTJV2EVDSZDVF26ATVATE', 'warc-date': '2021-03-07T02:45:16Z', 'warc-identified-content-language': 'rus', 'warc-record-id': '<urn:uuid:aa9b3fc9-fb66-45fa-a064-62ae5fd67970>', 'warc-refers-to': '<urn:uuid:e9145f1e-4ce5-44db-a7d7-234842b31973>', 'warc-target-uri': 'http://budzdorov-kaluga.ru/statyi_i_materialy/o-grippe', 'warc-type': 'conversion'}, 'nb_sentences': 15, 'offset': 0}, 'text': '«Геро́й» (кит. 英雄) — исторический фильм режиссёра Чжана Имоу, ' 'снятый в 2002 году. Продолжительность — 93 минуты (суще...'} ``` #### deduplicated_rue * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 17472, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:YBMO2PR3WF7WQ7UEU5YLRBI7BZ6IP6KB', 'warc-date': '2021-03-06T15:24:27Z', 'warc-identified-content-language': 'ukr,rus', 'warc-record-id': '<urn:uuid:ca71a8fe-adb9-4346-a5b4-7d283f1410f8>', 'warc-refers-to': '<urn:uuid:a609d9f9-5040-4ca5-80a8-aa2c4c7a3525>', 'warc-target-uri': 'https://rue.wikipedia.org/wiki/%D0%9F%D0%BE%D0%BC%D1%96%D1%87:%D0%9A%D0%B0%D1%82%D0%B5%D2%91%D0%BE%D1%80%D1%96%D1%97', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Наприклад можете едітовати Катеґорія:Фізіци і додати одказ ' '[[Катеґорія:Фізіка]]. Катеґорія Фізіци буде пікатеґоріёв к...'} ``` #### deduplicated_sa * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 4166, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:ACZ66HH67HYSPS6I7YYQX64HRD4O5GIH', 'warc-date': '2021-02-24T20:35:30Z', 'warc-identified-content-language': 'san,eng', 'warc-record-id': '<urn:uuid:12bc2393-cb9b-492d-9398-f6b1090bd999>', 'warc-refers-to': '<urn:uuid:6e883bd6-350e-4280-94dc-ee84f44d2458>', 'warc-target-uri': 'https://sa.wikipedia.org/wiki/%E0%A4%B5%E0%A4%BF%E0%A4%B6%E0%A5%87%E0%A4%B7%E0%A4%83:%E0%A4%95%E0%A4%BF%E0%A4%AE%E0%A4%A4%E0%A5%8D%E0%A4%B0_%E0%A4%B8%E0%A4%81%E0%A4%B2%E0%A5%8D%E0%A4%B2%E0%A4%97%E0%A5%8D%E0%A4%A8%E0%A4%AE%E0%A5%8D/%E0%A4%B5%E0%A4%B0%E0%A5%8D%E0%A4%97%E0%A4%83:%E0%A5%A9%E0%A5%AC%E0%A5%A7', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'केभ्यः पृष्ठेभ्यः सम्बद्धम् पृष्ठम्: नामाकाशः : सर्वाणि (मुख्यम्) ' 'सम्भाषणम् सदस्यः सदस्यसम्भाषणम् विकिपीडिया विकिपीडि...'} ``` #### deduplicated_sah * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 1724, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:5PKOMLENZCNOU6PT27NCNKTQFPRC37RQ', 'warc-date': '2021-03-03T15:19:03Z', 'warc-identified-content-language': 'ukr,rus', 'warc-record-id': '<urn:uuid:59b7bbeb-e375-4d8c-8b7c-fbe09e5ce21e>', 'warc-refers-to': '<urn:uuid:512d4df0-bd91-47aa-8f23-eb2a8d4b426e>', 'warc-target-uri': 'https://sah.m.wikipedia.org/wiki/%D0%A7%D0%B5%D1%80%D0%BD%D0%B8%D0%B3%D0%BE%D0%B2_%D1%83%D0%BE%D0%B1%D0%B0%D0%BB%D0%B0%D2%BB%D0%B0', 
'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Тиэкис Creative Commons Attribution-ShareAlike лиссиэнсийэ ' 'усулуобуйатынан тарҕанар, сорох түбэлтэҕэ эбии көрдөбүллэр...'} ``` #### deduplicated_scn * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 3622, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:VGCXGU3B2WY722G2LRJ56RSYT4HSLUGI', 'warc-date': '2021-03-03T02:35:42Z', 'warc-identified-content-language': 'cos,ita', 'warc-record-id': '<urn:uuid:caeb7ba3-1bc2-4ef7-95cb-eb0d4d0792d6>', 'warc-refers-to': '<urn:uuid:19e33395-5981-4f6d-857b-12cf7d761b58>', 'warc-target-uri': 'https://scn.wikipedia.org/wiki/Canali_d%C3%A2_M%C3%A0nica', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Lu ripartu francisi dâ Mànica, chi cumprenni la pinìsula dû ' 'Cotentin, chi si nesci ntô canali, pigghia lu sò nomu dû ...'} ``` #### deduplicated_sco * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 140370, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:TRXAEE4XHP7FT4FCJF3DSEKD7YBPCFOR', 'warc-date': '2021-03-02T07:33:12Z', 'warc-identified-content-language': 'eng,vol', 'warc-record-id': '<urn:uuid:d406a6c9-dba6-4955-8ede-f8082f7da58f>', 'warc-refers-to': '<urn:uuid:155919e0-a689-415c-b2aa-eccd06021476>', 'warc-target-uri': 'https://baggato.com/fo', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'fowjo fowjp fowjq fowjr fowka fowkb fowkc fowkd fowke fowkf fowkg ' 'fowkh fowki fowkj fowkk fowkl fowkm fowkn fowko fow...'} ``` #### deduplicated_sd * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 17619, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:DLWVP7WGNP64RB6ZLHDNQEJ7D24BYXOR', 'warc-date': '2021-02-24T20:04:37Z', 'warc-identified-content-language': 'snd,eng', 'warc-record-id': '<urn:uuid:8997e1c6-4d72-47f1-bffe-d18a00ae6b94>', 'warc-refers-to': '<urn:uuid:946e892e-46c3-4a68-8532-1eac8b65b76a>', 'warc-target-uri': 'https://sd.info-4all.ru/%D8%B1%D8%AA%D9%88%D9%BD%D9%88-%D8%A2%D8%A6%D9%8A%D8%B1%D8%B1%D8%A7/%DA%AA%D9%84%D8%A7%DA%AA/', 'warc-type': 'conversion'}, 'nb_sentences': 21, 'offset': 0}, 'text': 'بيلففيل ڪيئن ٿيو؟ پهرين توهان کي پنهنجو ضمير وڃائڻ جي ضرورت آهي. ' 'اهي تعليم کان سواءِ صرف سست ماڻهو نه وٺندا آهن ، پر ...'} ``` #### deduplicated_sh * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 12517, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:IH6O64JAV4PLXURRD5LKU6C46DGGXS27', 'warc-date': '2021-03-09T06:06:53Z', 'warc-identified-content-language': 'fra,hrv,eng', 'warc-record-id': '<urn:uuid:ddc0f982-aea2-4206-a431-02e6c89ab090>', 'warc-refers-to': '<urn:uuid:904a206d-515a-4f11-ad25-9035adbf0cfa>', 'warc-target-uri': 'https://sh.wikipedia.org/wiki/Cliponville', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Po podacima iz 1999. 
godine u opštini je živelo 245 stanovnika, a ' 'gustina naseljenosti je iznosila 33 stanovnika/km²....'} ``` #### deduplicated_si * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 18426, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:CZO426HASJ2VV5IMXEAHY2T53ZTDOZEP', 'warc-date': '2021-02-24T20:38:23Z', 'warc-identified-content-language': 'sin,eng', 'warc-record-id': '<urn:uuid:bec8b1fe-0659-4f47-b244-018b5dac9e30>', 'warc-refers-to': '<urn:uuid:1c918e04-8c2d-4bc0-bcfb-bf978ab0c0ea>', 'warc-target-uri': 'https://androidwedakarayo.com/before-you-look-for-a-job-please-fix-your-facebook-account/', 'warc-type': 'conversion'}, 'nb_sentences': 19, 'offset': 0}, 'text': 'ඉස්සර තමයි අපි සෝෂල්මීඩියා පාවිච්චි කරන්නේ අපි ආස නළු නිළියන්ගේ ' 'ෆොටෝ, හදපු කෑම, ඩ්\u200dරින්ක් එකක් දාන්න සෙට් වෙච්චි වෙලා...'} ``` #### deduplicated_sk * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 37910, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:ODXVMZXR34B45NQTMJIKKK2VGBGRXKEA', 'warc-date': '2021-03-01T16:29:19Z', 'warc-identified-content-language': 'slk', 'warc-record-id': '<urn:uuid:6a22612f-9bbf-4f74-8cca-0457f069baa4>', 'warc-refers-to': '<urn:uuid:3981cb48-fadf-463f-9fc9-a6d717b9dc71>', 'warc-target-uri': 'http://www.tomsta.sk/', 'warc-type': 'conversion'}, 'nb_sentences': 56, 'offset': 0}, 'text': 'Keďže všade naokolo sú iba kopce, mohol byť jedine horský. Dnes je ' 'z toho najlepší horský triatlon na Slovensku, ktor...'} ``` #### deduplicated_sl * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 8130, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:UFZ4P4LVU4TXYJIHZULTCIVJ4GA3JT54', 'warc-date': '2021-03-07T14:50:23Z', 'warc-identified-content-language': 'slv,eng', 'warc-record-id': '<urn:uuid:e50a528d-ebd3-46dc-92d7-af394aaa896a>', 'warc-refers-to': '<urn:uuid:dbfe8ac4-b415-45a8-a16c-c168ed5ce37b>', 'warc-target-uri': 'https://www.edi-nm.com/si/varicosen-mnenja-cena-lekarna/', 'warc-type': 'conversion'}, 'nb_sentences': 6, 'offset': 0}, 'text': 'Po najnovejših raziskavah v Sloveniji vsaka 4. oseba med 36. in 95. 
' 'letom trpi zaradi kronične venske insuficience – ...'} ``` #### deduplicated_so * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 17837, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:WIS4GECYGJYMTZMVFOUVUMRWTAPFZUSK', 'warc-date': '2021-03-03T20:11:46Z', 'warc-identified-content-language': 'bul,eng,srp', 'warc-record-id': '<urn:uuid:976de977-97b9-4517-8a42-2fc82fdda461>', 'warc-refers-to': '<urn:uuid:a0f1fbd0-b2cb-495f-93f3-53e77acae3f5>', 'warc-target-uri': 'https://studioqueens.bgnick.info/l4fOorCpgdutsnY/igra-na.html', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'ххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххххх...'} ``` #### deduplicated_sq * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 6129, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:D3PWGEKLJKJEGOTQLYVQNUV4URWEFH2P', 'warc-date': '2021-03-09T03:17:23Z', 'warc-identified-content-language': 'sqi', 'warc-record-id': '<urn:uuid:3299bc56-c7fb-4655-bebd-393510d89aaa>', 'warc-refers-to': '<urn:uuid:1416a2ad-d319-4c60-b663-29239ff79154>', 'warc-target-uri': 'http://ata.gov.al/2019/11/03/video-u-prek-nga-termeti-ndertohet-nga-e-para-banesa-e-familjes-stafa-ne-petrele/', 'warc-type': 'conversion'}, 'nb_sentences': 11, 'offset': 0}, 'text': 'TIRANË, 3 nëntor/ATSH/- Në Petrelë të Tiranës ka nisur puna për ' 'ndërtimin nga e para të shtëpisë së familjes Stafa, e...'} ``` #### deduplicated_sr * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 7735, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:7LKRS7R2L2K53YTV5CYR2IAJRNIQKGBJ', 'warc-date': '2021-03-03T11:23:25Z', 'warc-identified-content-language': 'srp,eng', 'warc-record-id': '<urn:uuid:8ade8406-bedb-41a7-b854-8429b6b21214>', 'warc-refers-to': '<urn:uuid:cca5c75c-7221-4247-a51e-f7be99661793>', 'warc-target-uri': 'https://vojvodjanske.rs/40-jubilarni-somborski-polumaraton-u-nedelju-19-maja/', 'warc-type': 'conversion'}, 'nb_sentences': 4, 'offset': 0}, 'text': '„У недељу 19. маја, у Сомбору се одржава јубиларна 40. најстарија ' 'улична трка у Републици Србији, Сомборски полумарат...'} ``` #### deduplicated_su * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 14013, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:IMFFV646FPXSYLMOATX7O6CDMKUU4BFL', 'warc-date': '2021-03-09T10:29:19Z', 'warc-identified-content-language': 'sun,ind', 'warc-record-id': '<urn:uuid:02eb1f6f-7040-4b8f-b995-7c547196da4b>', 'warc-refers-to': '<urn:uuid:4a9807f7-0c98-493f-ab84-8fafc61a1e50>', 'warc-target-uri': 'https://www.masdinko.com/2019/04/soal-utspts-bahasa-sunda-sd-kelas-4.html', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Pikeun urang lembur, daun seureuh téh geus teu anéh deui. 
Seureuh ' 'mah mangrupa tangkal nu ngarémbét kana tangkal séjéna.'} ``` #### deduplicated_sv * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 87099, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:TKLP6CG56M45ABZQGDD7EDTCQMKTSAVS', 'warc-date': '2021-03-05T20:01:45Z', 'warc-identified-content-language': 'swe', 'warc-record-id': '<urn:uuid:97860695-1688-46ef-93db-5e15742820af>', 'warc-refers-to': '<urn:uuid:7c924b0e-39e1-4921-a561-52dc5453b886>', 'warc-target-uri': 'https://fortretligheter.blogspot.com/2011/01/', 'warc-type': 'conversion'}, 'nb_sentences': 255, 'offset': 0}, 'text': 'Svenska trupper hade en kväll för flera hundra år sedan när Sverige ' 'och Danmark låg i Krig med varandra kommit med sk...'} ``` #### deduplicated_sw * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 2098, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:FPGJP34F47FJQSZF62PELBLYNJ4RTCSE', 'warc-date': '2021-03-03T15:24:39Z', 'warc-identified-content-language': 'swa', 'warc-record-id': '<urn:uuid:d42018de-64be-41f9-b4b6-700dd0051ce3>', 'warc-refers-to': '<urn:uuid:a40c8328-ab33-4113-9ea1-8c35967b0bde>', 'warc-target-uri': 'http://mwanza.go.tz/videos/78', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Mkuu wa Mkoa wa Mwanza Mhe.John Mongella akifungua Baraza la ' 'biashara katika kikao kilichofanyika kwenye ukumbi wa mk...'} ``` #### deduplicated_ta * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 49341, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:FQEPDKJ7AYCAEVL5SRUQ5QOULOOSHECD', 'warc-date': '2021-03-09T04:15:52Z', 'warc-identified-content-language': 'tam', 'warc-record-id': '<urn:uuid:2fa70e6a-a31a-4359-b4ff-54ce7f5d6200>', 'warc-refers-to': '<urn:uuid:92eb01ff-4f82-438b-8d1f-1722fe23285a>', 'warc-target-uri': 'https://thiru2050.blogspot.com/2019_05_26_archive.html', 'warc-type': 'conversion'}, 'nb_sentences': 15, 'offset': 0}, 'text': '... 2017 adimmix psychic leah அறிவுரை கும்பம் மேஷம் ஜோதிடம் ' 'புற்றுநோய் மகர படிக குழந்தைகள் மனநோய் புத்தகங்கள் முன்அ...'} ``` #### deduplicated_te * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 31516, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:MG3MFYW5T6XSW3XYZ4ZIKGJW5XAY2RCG', 'warc-date': '2021-03-06T18:07:45Z', 'warc-identified-content-language': 'tel', 'warc-record-id': '<urn:uuid:238b108b-d16e-41d2-b06e-464267352b0e>', 'warc-refers-to': '<urn:uuid:3663318c-d256-4c97-b71b-e4eeb2e6b58a>', 'warc-target-uri': 'https://telugu.greatandhra.com/articles/mbs/ammo-ativa-01-114908.html', 'warc-type': 'conversion'}, 'nb_sentences': 15, 'offset': 0}, 'text': 'అది 1868. ఇంగ్లండ్\u200cలోని బ్రైటన్\u200cలో క్రిస్టియానా ఎడ్మండ్స్ ' 'అనే 40 ఏళ్ల మహిళ వుండేది. పెళ్లి కాలేదు. 
తల్లితో కలిసి ఒక ఎ...'} ``` #### deduplicated_tg * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 16112, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:LDBVTK3U6MY7J475ZR4LRLFK2CC2QWG5', 'warc-date': '2021-03-09T03:53:03Z', 'warc-identified-content-language': 'tgk,tat,rus', 'warc-record-id': '<urn:uuid:b2519476-6812-4a38-8522-f5292b95e73a>', 'warc-refers-to': '<urn:uuid:f11fa878-d4c6-4e56-bc50-a76554b7d811>', 'warc-target-uri': 'http://hamsafon.tj/2784-imr1263z-1203avoi-1207um1203ur1251-sofu-be1171ubor-meshavad.html', 'warc-type': 'conversion'}, 'nb_sentences': 15, 'offset': 0}, 'text': 'ДУШАНБЕ, 10.01.2017/АМИТ «Ховар»/. 10 январ дар пойтахти кишвар ' 'ҳавои тағйирёбандаи бебориш дар назар дошта шудааст. ...'} ``` #### deduplicated_th * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 50841, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:MESEMAONUQXZZEA6IKBT3VCUZ43ZP4B7', 'warc-date': '2021-02-28T15:41:47Z', 'warc-identified-content-language': 'tha,eng', 'warc-record-id': '<urn:uuid:46495e6b-f22f-4dc6-86ab-3bbed66ce7e4>', 'warc-refers-to': '<urn:uuid:10946c1b-9dc5-4afb-bc74-d6baf9793a03>', 'warc-target-uri': 'https://www.thaicsr.com/2009/02/blog-post_08.html', 'warc-type': 'conversion'}, 'nb_sentences': 34, 'offset': 0}, 'text': 'ปี พ.ศ. 2521 ' 'พระบาทสมเด็จพระเจ้าอยู่หัวเสด็จเยี่ยมราษฎรบ้านพระบาทห้วยต้ม ' 'ทรงทอดพระเนตรเห็นสภาพพื้นที่และชีวิตความเป็น...'} ``` #### deduplicated_tk * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 22486, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:VNR5UQCQIGPEZQBZL4VAOQDASFOVNRDL', 'warc-date': '2021-03-03T15:07:09Z', 'warc-identified-content-language': 'eng,rus', 'warc-record-id': '<urn:uuid:b514b9c5-1ccd-4cf0-bea7-ea38a5aef686>', 'warc-refers-to': '<urn:uuid:edf1f6cb-9f46-4790-8256-eb984db0f0d5>', 'warc-target-uri': 'http://www.newscentralasia.net/2020/12/02/move-forward-with-universal-right-and-responsibility/', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Türkmenistanyň Daşary işler ministriniň Owganystanyň Milli Yslam ' 'Hereketi partiýasynyň ýolbaşçysy bilen duşuşygy'} ``` #### deduplicated_tl * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 15036, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:2FGV42SN72HRKRBEEQ7QJVJBLUYQPCIH', 'warc-date': '2021-03-09T04:48:08Z', 'warc-identified-content-language': 'eng,khm,lao', 'warc-record-id': '<urn:uuid:04d772d6-09db-4d5a-86c8-22b914a35b6f>', 'warc-refers-to': '<urn:uuid:f3cdcafa-5a28-4fbb-81df-7cc5e7bb3248>', 'warc-target-uri': 'http://www.ahealthyme.com/RelatedItems/RelatedDocuments.pg?d=&TypeId=121&ContentId=761&Category=DC', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'PAUNAWA: Kung nagsasalita ka ng wikang Tagalog, mayroon kang ' 'magagamit na mga libreng serbisyo para sa tulong sa wika...'} ``` #### deduplicated_tr * Size of downloaded dataset files: None * Size of the generated dataset: 
None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 14815, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:GVNKVEGK7TMZGXIIMLV2O2YWYJRAKBO2', 'warc-date': '2021-03-04T00:44:44Z', 'warc-identified-content-language': 'tur,eng', 'warc-record-id': '<urn:uuid:7acbe6a8-83c4-4ebd-8d29-62cb0b150b2f>', 'warc-refers-to': '<urn:uuid:038ffe28-2fd1-49b9-a5c6-3dddd1af6318>', 'warc-target-uri': 'https://www.kadikoygitarkursum.com/search/label/g%C3%B6ztepe%20gitar%20dersi', 'warc-type': 'conversion'}, 'nb_sentences': 5, 'offset': 0}, 'text': 'İlk olarak, bir tek siyah kirpik takımı için fiyat belirleyin, ' "örneğin, 4000 ruble'ye eşittir. Artık bir müşteriyle ç..."} ``` #### deduplicated_tt * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 26112, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:FAPA2JNYP6OL53T6OIL3SR3EGMX2R4XY', 'warc-date': '2021-03-09T04:42:07Z', 'warc-identified-content-language': 'tat,rus', 'warc-record-id': '<urn:uuid:5cac6257-fa6c-4e67-9ba1-8e7d7424ef54>', 'warc-refers-to': '<urn:uuid:52642c8d-da35-462f-9776-ccfa88353466>', 'warc-target-uri': 'http://saby-rt.ru/news/konkurslar/fotokonkurs', 'warc-type': 'conversion'}, 'nb_sentences': 12, 'offset': 0}, 'text': 'Хөрмәтле хатын-кызларбыз! Сезне чын күңелдән 8 Март бәйрәме белән ' 'тәбрик итәбез! Яраткан әниләребез, әбиләребез, гоме...'} ``` #### deduplicated_tyv * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 7766, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:L5GRAANBGMGNYXDFF3ECSWJ5Q6D4QFHS', 'warc-date': '2021-02-28T07:20:44Z', 'warc-identified-content-language': 'rus', 'warc-record-id': '<urn:uuid:238082a9-0adf-4c8c-b749-1a523c91e229>', 'warc-refers-to': '<urn:uuid:4bfd0ca2-52bb-4ece-9ccf-cdcee0b30ee9>', 'warc-target-uri': 'https://tyv.wikipedia.org/wiki/%D0%A1%D0%B0%D1%80%D0%BB%D1%8B%D0%BA', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Сарлык бызаазы – ниити ады, назыны бир хар чедир, сарлыктың эр ' 'бызаазы аза сарлыктың кыс бызаазы деп чугаалаар.'} ``` #### deduplicated_ug * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 19089, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:DHYFNWWKECLR6BHWF763HC62JRCASMGH', 'warc-date': '2021-03-09T04:33:38Z', 'warc-identified-content-language': 'uig', 'warc-record-id': '<urn:uuid:d1185989-9cd6-40f2-ad63-003e405c9141>', 'warc-refers-to': '<urn:uuid:923ac168-6484-49ea-807d-be3ced85a885>', 'warc-target-uri': 'https://www.akademiye.org/ug/?p=10959', 'warc-type': 'conversion'}, 'nb_sentences': 30, 'offset': 0}, 'text': 'شەرقىي تۈركىستانئاكادېمىيە ھەققىدەئەزالىقتەۋپىق ' 'مۇكاپاتىئىئانەئالاقەTürkçeEnglishئۇيغۇرچەУйғурчәUyghurche\n' 'مىللىي مەۋج...'} ``` #### deduplicated_uk * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 16706, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:46XDNKJUJSG22BA4B6DDET2R5GMBU3LV', 
'warc-date': '2021-02-26T22:04:41Z', 'warc-identified-content-language': 'ukr,eng', 'warc-record-id': '<urn:uuid:a3c68b5a-f9e8-41b6-b2bb-3d43e4d7a117>', 'warc-refers-to': '<urn:uuid:6a35e918-42ce-4349-9a6c-edcd22f07254>', 'warc-target-uri': 'https://www.interesniy.kiev.ua/vasil-boroday-korifey-mistetstva-pla/', 'warc-type': 'conversion'}, 'nb_sentences': 14, 'offset': 0}, 'text': 'На Женевському міжнародному автосалоні 2017 бренд Fiat буде ' 'показувати дві свої душі, які співіснують у великій повні...'} ``` #### deduplicated_ur * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 9450, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:3SZ3UYOSHTRE3W3PDZXRO7DDSLRKENV2', 'warc-date': '2021-03-09T03:21:23Z', 'warc-identified-content-language': 'eng,urd,bos', 'warc-record-id': '<urn:uuid:0ded0cb4-2f73-41a7-a093-5dcfed204738>', 'warc-refers-to': '<urn:uuid:6b380ef1-fec4-4f48-bcdc-86700c508dfc>', 'warc-target-uri': 'http://www.khanaghar.org/?p=50', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'اتراکھنڈ کے سلماتا گاؤں کی لڑائیتی دیوی ایک پُر اعتماد اور عقلمند ' 'مجاہد ہیں، جن کی طرف دیگر خواتین بھی دیکھ رہی ہیں۔ ...'} ``` #### deduplicated_uz * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 3808, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:FYYLFGJTK74HXE2LRJOAR5E6BPGCQ5NU', 'warc-date': '2021-03-09T04:38:24Z', 'warc-identified-content-language': 'uzb,ben,ltz', 'warc-record-id': '<urn:uuid:2a56bf64-042e-47fa-9abb-819b13bf7920>', 'warc-refers-to': '<urn:uuid:155b1e81-dc6e-46dc-9544-5a6a97c05118>', 'warc-target-uri': 'https://uz.wikipedia.org/wiki/1408', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Matn Creative Commons Attribution-ShareAlike litsenziyasi boʻyicha ' 'ommalashtirilmoqda, alohida holatlarda qoʻshimcha ...'} ``` #### deduplicated_vec * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 7088, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:CX2L4ZL4I4OLXG7YJTXLRKNFHE7RIHRX', 'warc-date': '2021-02-24T19:06:44Z', 'warc-identified-content-language': None, 'warc-record-id': '<urn:uuid:abc5a544-7009-407a-a5a3-5c2145195bd5>', 'warc-refers-to': '<urn:uuid:4a956690-536a-437b-afe2-50dc7ac54b39>', 'warc-target-uri': 'https://vec.wikipedia.org/wiki/Utensa:Aelwyn', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Łe parołe che vien dal łatin -TAS, TATIS łe termina par -DÁ. Łe ' 'parołe che łe vien da -ICUS łe tèrmina par -ÉGO. 
Łe p...'} ``` #### deduplicated_vi * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 7845, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:CCXAI5SV5PFLNPSMP4UF4SQGGSYN37AP', 'warc-date': '2021-03-03T02:43:13Z', 'warc-identified-content-language': 'vie', 'warc-record-id': '<urn:uuid:7ce27f30-a1eb-4978-83d0-5110421393b0>', 'warc-refers-to': '<urn:uuid:5dad988d-2426-402c-ac0c-1fa811ed96dc>', 'warc-target-uri': 'http://httlvinhphuoc.org/vi/duong-linh/Hoc-Kinh-Thanh-hang-ngay/Lam-Dieu-Thien-Bang-Tinh-Yeu-Thuong-6521/', 'warc-type': 'conversion'}, 'nb_sentences': 8, 'offset': 0}, 'text': 'Bitcoin và tiền kỹ thuật số nói chung đang dần xâm nhập vào các ' 'thị trường tài chính khi ngày càng có nhiều nhà đ...'} ``` #### deduplicated_vls * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 78684, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:VQNDJYOQXZLCLMDXIFCT4BHSW6LVTJQE', 'warc-date': '2021-02-28T16:16:27Z', 'warc-identified-content-language': 'fra,eng', 'warc-record-id': '<urn:uuid:266acc08-1c69-449f-95ad-0dcc82565788>', 'warc-refers-to': '<urn:uuid:c45dcd64-1b20-4ffc-bdd7-7dbff4f0a726>', 'warc-target-uri': 'https://fr.readkong.com/page/livret-des-licences-faculte-des-sciences-et-des-techniques-7906239', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': ' ' '...'} ``` #### deduplicated_vo * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 1937, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:VPG56ZACAOAZTXHSSXFJOBBH44NWUSJW', 'warc-date': '2021-03-09T06:02:56Z', 'warc-identified-content-language': 'vol,eng,srp', 'warc-record-id': '<urn:uuid:2cb96947-ee22-42a8-be36-31a03203efcc>', 'warc-refers-to': '<urn:uuid:da82b7d8-535b-4e39-8d9b-ea8c3d4a4460>', 'warc-target-uri': 'https://vo.wikipedia.org/wiki/Arnesano', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'Arnesano binon zif in topäd: Puglia, in Litaliyän. 
Arnesano topon ' 'videtü 40° 20’ N e lunetü 18° 6’ L.'} ``` #### deduplicated_wa * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 6518, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:6NC6V46TRVMWTOHCPMTDVRTP7GGL3G3S', 'warc-date': '2021-02-26T09:47:28Z', 'warc-identified-content-language': 'wol', 'warc-record-id': '<urn:uuid:4d800a25-ccf5-4d55-9795-3f7974b988b1>', 'warc-refers-to': '<urn:uuid:87119673-154b-4246-8c39-35737821a7ff>', 'warc-target-uri': 'https://wa.wikipedia.org/wiki/Senegal', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': "Cisse pådje ci n' est co k' on djermon, dj' ô bén k' el pådje est " "djusse sibåtcheye, eyet co trop tene; et s' divreut..."} ``` #### deduplicated_war * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 7356, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:SVXPIA63QN77O2IJXL4Q75LNVLDEBHYW', 'warc-date': '2021-03-09T05:49:57Z', 'warc-identified-content-language': 'war,tha,eng', 'warc-record-id': '<urn:uuid:a143ebc6-a7b4-4fa7-96b3-59ba2c1dd03c>', 'warc-refers-to': '<urn:uuid:571d090a-cb65-41e7-ae7c-d95588d41c28>', 'warc-target-uri': 'https://war.wikipedia.org/wiki/Chakri_nga_Dinastiya', 'warc-type': 'conversion'}, 'nb_sentences': 2, 'offset': 0}, 'text': 'An Chakri nga Dinastiya (Thai: ราชวงศ์จักรี: Rajawongse Chakri) ' 'namuno ngan naghadi han Thailand tikang han hi hadi T...'} ``` #### deduplicated_wuu * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 26503, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:XAH2SJIYORGGSMLN4DNJZCNVG2FVWF3C', 'warc-date': '2021-03-09T04:09:05Z', 'warc-identified-content-language': 'jpn', 'warc-record-id': '<urn:uuid:8df3f922-fbbf-4733-a3a8-9f34b7505cbf>', 'warc-refers-to': '<urn:uuid:a55eb04e-3679-4817-b94b-e0317142ab2b>', 'warc-target-uri': 'https://wpedia.goo.ne.jp/wiki/%E4%BC%8A%E5%8D%81%E4%BA%94%E5%9E%8B%E6%BD%9C%E6%B0%B4%E8%89%A6', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': '伊15 [I] | 伊17 | 伊19 | 伊21 | 伊23 | 伊25 | 伊26 | 伊27 | 伊28 | 伊29 | 伊30 ' '| 伊31 | 伊32 | 伊33 | 伊34 | 伊35 | 伊36 | 伊37 | 伊38 |...'} ``` #### deduplicated_xal * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 8598, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:KGZNUXNSFUSFYC45UQJRZPEHXNGK6C3H', 'warc-date': '2021-03-02T01:27:37Z', 'warc-identified-content-language': 'rus,spa', 'warc-record-id': '<urn:uuid:676f6ca8-706b-4f77-926f-bda90e3cd772>', 'warc-refers-to': '<urn:uuid:452efc2f-85ce-4e90-b268-2f46893172f8>', 'warc-target-uri': 'http://born.altnzam.com/2014/01/', 'warc-type': 'conversion'}, 'nb_sentences': 2, 'offset': 0}, 'text': 'Ааһ: Хоосн ааһ би, хагсхларн һанцардсн болҗ медгдҗәнә. Нанд усн йир ' 'кергтә болҗана. 
Ус өгит, — эзнәсн сурна.\n' 'Ааһ ууль...'} ``` #### deduplicated_xmf * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 7053, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:OQKCWDGQCIJHXMM3SCUO2KPBMFCQACUJ', 'warc-date': '2021-03-03T14:27:35Z', 'warc-identified-content-language': 'kat', 'warc-record-id': '<urn:uuid:e701a584-a14f-49ac-80b3-a7604f98fc92>', 'warc-refers-to': '<urn:uuid:8fc0f735-6e2b-45b2-bee1-bf169e08433b>', 'warc-target-uri': 'https://xmf.wikipedia.org/wiki/%E1%83%99%E1%83%90%E1%83%A2%E1%83%94%E1%83%92%E1%83%9D%E1%83%A0%E1%83%98%E1%83%90:%E1%83%90%E1%83%94%E1%83%A0%E1%83%9D%E1%83%9E%E1%83%9D%E1%83%A0%E1%83%A2%E1%83%94%E1%83%A4%E1%83%98_%E1%83%90%E1%83%9C%E1%83%91%E1%83%90%E1%83%9C%E1%83%98%E1%83%A8_%E1%83%9B%E1%83%94%E1%83%AF%E1%83%98%E1%83%9C%E1%83%90%E1%83%97', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'მოჩამილი ტექსტი წჷმორინელი რე Creative Commons ' 'Attribution-ShareAlike ლიცენზიათ; შილებე გეძინელი პირობეფიშ ' 'არსებუა. კ...'} ``` #### deduplicated_yi * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 10420, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:CZAVPSCGNW77WY2V2IJNK7R2CCUEMZFB', 'warc-date': '2021-02-24T21:10:52Z', 'warc-identified-content-language': 'yid,eng', 'warc-record-id': '<urn:uuid:7aa9e375-726d-42bd-832a-deee6dce5e4a>', 'warc-refers-to': '<urn:uuid:53354991-7bca-4134-95ce-ce7edebf841b>', 'warc-target-uri': 'http://www.kaveshtiebel.com/viewtopic.php?p=237817', 'warc-type': 'conversion'}, 'nb_sentences': 10, 'offset': 0}, 'text': 'עמעזאן איז יעצט ארויסגעקומען מיט א נייע סמארט ספיקער סיסטעם. ' "ס'הייסט Echo. אין Echo דרייט זיך א ראבאטישקע זי הייסט אל..."} ``` #### deduplicated_yo * Size of downloaded dataset files: None * Size of the generated dataset: None * Total amount of disk used: None An example of 'train' looks as follows: ``` { 'id': 0, 'meta': { 'headers': { 'content-length': 3627, 'content-type': 'text/plain', 'warc-block-digest': 'sha1:UISXP36HUEMW2LBTMAR4CTISUYAVZZAD', 'warc-date': '2021-03-07T12:45:52Z', 'warc-identified-content-language': 'yor,eng', 'warc-record-id': '<urn:uuid:e67645e9-ee6c-4c88-9b27-a158dc7f83e9>', 'warc-refers-to': '<urn:uuid:07c8d83b-7840-4238-a3b4-edc3f98ecdd5>', 'warc-target-uri': 'https://edeyorubarewa.com/itelorun/', 'warc-type': 'conversion'}, 'nb_sentences': 1, 'offset': 0}, 'text': 'A dá sílè fún àwọn ènìyàn tí wọn fẹ́ràn láti mò nípa èdè Yorùbá, ' 'àṣà àti ìṣe ilẹ̀ kóòtù ojire. 
Kíkó àwọn ọmọ wa ni Èd...'}
```

#### deduplicated_zh

* Size of downloaded dataset files: None
* Size of the generated dataset: None
* Total amount of disk used: None

An example of 'train' looks as follows:
```
{ 'id': 0,
  'meta': { 'headers': { 'content-length': 108400,
                         'content-type': 'text/plain',
                         'warc-block-digest': 'sha1:PP6MQUJB3F4G63HKKGKO2QJG7SMRMTFJ',
                         'warc-date': '2021-02-28T09:41:11Z',
                         'warc-identified-content-language': 'zho',
                         'warc-record-id': '<urn:uuid:132aab53-daff-4bae-83d0-a0cdb4039d00>',
                         'warc-refers-to': '<urn:uuid:2f26c020-f1fc-4216-a616-4683e0b25b1e>',
                         'warc-target-uri': 'http://www.yummtumm.com/offer',
                         'warc-type': 'conversion'},
            'nb_sentences': 7,
            'offset': 0},
  'text': '久久精品视频在线看15_久久人人97超碰_久久爱 '
          '人人澡超碰碰中文字幕,人人天天夜夜日日狠狠,久久人人97超碰,人人婷婷开心情五月,日日摸天天摸人人看,碰人人么免费视频,色综合天天综合网 '
          '久久爱免费视频在线观看_久久爱视频_久久爱在线...'}
```
</details>

### Data Fields

* `id`: an `int64` feature.
* `meta`: Metadata
* `meta.headers`: WARC Headers
* `meta.headers.content-length`: `int64` Content length (in bytes) **before** cleaning
* `meta.headers.content-type`: `string` MIME type
* `meta.headers.warc-block-digest`: `string` Algorithm name and calculated value of a digest applied to the full block of the record
* `meta.headers.warc-date`: `string` Crawl date (YYYY-MM-DDThh:mm:ssZ)
* `meta.headers.warc-identified-content-language`: `string` Comma-separated list of language identifications done by CommonCrawl (uses CLD3)
* `meta.headers.warc-record-id`: `string` Record ID
* `meta.headers.warc-refers-to`: `string` Record-ID of a single record for which the present record holds additional content
* `meta.headers.warc-target-uri`: `string` URI from where the content has been fetched
* `meta.headers.warc-type`: `string` Type of the WARC Record
* `meta.nb_sentences`: `int64` Number of sentences in the text
* `meta.offset`: `int64` Line offset at which the record's text begins in the source file. Use it together with `meta.nb_sentences` when reading the source files directly, rather than using iterators, to recover the related text.
* `text`: `string` content

See the [WARC Format standard](https://iipc.github.io/warc-specifications/specifications/warc-format/warc-1.1/#warc-type-mandatory) for more details.
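As a quick illustration of these fields, the sketch below reads one record with the Hugging Face `datasets` library and inspects its WARC metadata; the `oscar-corpus/OSCAR-2109` repository name and the streaming flag are assumptions for illustration, not something this card prescribes:

```
from datasets import load_dataset

# Minimal sketch: stream one deduplicated configuration and look at the
# WARC metadata of its first record. The repository name is assumed here.
dataset = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_vi",
                       split="train", streaming=True)

record = next(iter(dataset))
headers = record["meta"]["headers"]
print(headers["warc-target-uri"])  # URI the content was fetched from
print(headers["warc-date"])        # crawl date (YYYY-MM-DDThh:mm:ssZ)

# `offset` and `nb_sentences` locate the record in its plain-text source
# file: the text spans lines [offset, offset + nb_sentences).
start = record["meta"]["offset"]
stop = start + record["meta"]["nb_sentences"]
print(f"source lines {start}..{stop - 1}")
```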
### Data Splits <details> <summary>Click to expand the number of samples per configuration</summary> ## Table | Language code | language | Size original | words original | size deduplicated | words deduplicated | |:----|:----------------------------|:-------|:----------------|:---------------|:----------------| | af | Afrikaans | 258MB | 44,628,392 | 157MB | 27,057,785 | | als | Alemanic | 7MB | 1,212,699 | 5MB | 871,664 | | am | Amharic | 405MB | 30,991,914 | 241MB | 18,326,043 | | an | Aragonese | 1MB | 115,938 | 608KB | 89,043 | | ar | Arabic | 69GB | 6,494,332,191 | 35GB | 3,365,025,866 | | arz | Egyptian Arabic | 48MB | 4,998,963 | 21MB | 2,341,904 | | ast | Asturian | 7MB | 1,085,670 | 4MB | 776,069 | | as | Assamese | 135MB | 7,917,923 | 95MB | 5,605,207 | | av | Avaric | 421KB | 25,104 | 325KB | 19,133 | | azb | South Azerbaijani | 47MB | 3,595,569 | 29MB | 2,243,562 | | az | Azerbaijani | 3GB | 344,187,319 | 1GB | 169,655,478 | | bar | Bavarian | 2KB | 247 | 1KB | 245 | | ba | Bashkir | 110MB | 8,121,603 | 77MB | 5,625,158 | | be | Belarusian | 2GB | 168,911,341 | 1GB | 98,212,442 | | bg | Bulgarian | 34GB | 2,994,775,106 | 15GB | 1,315,091,995 | | bh | Bihari languages | 579KB | 46,436 | 120KB | 9,181 | | bn | Bangla | 14GB | 814,550,777 | 7GB | 466,289,242 | | bo | Tibetan | 439MB | 3,751,935 | 358MB | 2,797,085 | | bpy | Bishnupriya | 11MB | 558,819 | 4MB | 280,825 | | br | Breton | 49MB | 8,067,480 | 23MB | 4,032,467 | | bs | Bosnian | 310KB | 50,266 | 175KB | 25,157 | | bxr | Russia Buriat | 22KB | 1,625 | 18KB | 1,335 | | ca | Catalan | 13GB | 2,110,833,307 | 6GB | 1,012,770,904 | | cbk | Chavacano | 168B | 2 | 168B | 2 | | ceb | Cebuano | 81MB | 12,921,589 | 58MB | 9,201,870 | | ce | Chechen | 29MB | 2,283,093 | 20MB | 1,638,963 | | ckb | Central Kurdish | 784MB | 63,417,572 | 367MB | 29,355,017 | | cs | Czech | 72GB | 9,996,052,434 | 33GB | 4,739,928,730 | | cv | Chuvash | 60MB | 4,592,449 | 41MB | 3,141,872 | | cy | Welsh | 307MB | 50,606,998 | 180MB | 30,198,860 | | da | Danish | 18GB | 2,892,004,180 | 10GB | 1,704,605,898 | | de | German | 433GB | 58,716,727,164 | 184GB | 25,446,071,671 | | diq | Dimli (individual language) | 294B | 38 | 147B | 19 | | dsb | Lower Sorbian | 31KB | 4,115 | 14KB | 1,873 | | dv | Divehi | 143MB | 8,293,093 | 111MB | 6,481,260 | | el | Greek | 72GB | 6,024,414,850 | 30GB | 2,539,719,195 | | eml | Unknown language [eml] | 22KB | 4,360 | 20KB | 3,876 | | en | English | 2936GB | 488,723,815,522 | 1342GB | 223,669,114,922 | | eo | Esperanto | 560MB | 84,432,772 | 390MB | 59,411,208 | | es | Spanish | 342GB | 54,715,337,438 | 160GB | 25,877,724,186 | | et | Estonian | 7GB | 954,732,803 | 3GB | 455,553,053 | | eu | Basque | 900MB | 110,676,692 | 503MB | 62,812,888 | | fa | Persian | 79GB | 8,566,653,720 | 35GB | 3,902,206,854 | | fi | Finnish | 35GB | 4,074,911,658 | 20GB | 2,357,264,196 | | frr | Northern Frisian | 7KB | 1,702 | 5KB | 1,267 | | fr | French | 340GB | 52,839,365,242 | 161GB | 25,245,127,073 | | fy | Western Frisian | 82MB | 13,094,538 | 57MB | 9,329,828 | | ga | Irish | 131MB | 20,142,627 | 69MB | 10,835,410 | | gd | Scottish Gaelic | 2MB | 332,946 | 1MB | 173,588 | | gl | Galician | 989MB | 155,030,216 | 549MB | 87,015,417 | | gn | Guarani | 32KB | 3,828 | 25KB | 3,056 | | gom | Goan Konkani | 3MB | 177,357 | 2MB | 148,801 | | gu | Gujarati | 1GB | 124,652,589 | 950MB | 63,150,641 | | gv | Manx | 1KB | 264 | 907B | 141 | | he | Hebrew | 29GB | 2,829,132,925 | 11GB | 1,156,588,919 | | hi | Hindi | 26GB | 2,009,754,819 | 13GB | 
1,038,914,735 | | hr | Croatian | 361MB | 51,654,735 | 169MB | 24,583,270 | | hsb | Upper Sorbian | 2MB | 305,176 | 1MB | 207,715 | | ht | Haitian Creole | 2KB | 592 | 1KB | 351 | | hu | Hungarian | 60GB | 7,415,936,687 | 29GB | 3,765,883,306 | | hy | Armenian | 4GB | 322,429,587 | 1GB | 124,515,953 | | ia | Interlingua | 291KB | 74,696 | 172KB | 41,625 | | id | Indonesian | 40GB | 5,767,715,387 | 22GB | 3,126,926,138 | | ie | Interlingue | 7KB | 1,432 | 2KB | 424 | | ilo | Iloko | 1MB | 275,029 | 857KB | 140,579 | | io | Ido | 276KB | 46,463 | 221KB | 36,976 | | is | Icelandic | 2GB | 290,997,158 | 1GB | 176,018,529 | | it | Italian | 192GB | 29,252,541,808 | 94GB | 14,426,829,908 | | ja | Japanese | 208GB | 5,357,000,179 | 96GB | 1,319,938,248 | | jbo | Lojban | 929KB | 179,684 | 731KB | 140,749 | | jv | Javanese | 858KB | 121,271 | 728KB | 101,386 | | ka | Georgian | 6GB | 304,329,117 | 2GB | 116,422,468 | | kk | Kazakh | 3GB | 236,767,203 | 1GB | 126,886,720 | | km | Khmer | 1GB | 28,188,612 | 860MB | 13,408,408 | | kn | Kannada | 2GB | 111,460,546 | 1GB | 56,801,321 | | ko | Korean | 35GB | 3,367,279,749 | 15GB | 1,475,474,588 | | krc | Karachay-Balkar | 2MB | 193,207 | 2MB | 153,755 | | ku | Kurdish | 152MB | 23,845,402 | 108MB | 17,264,310 | | kv | Komi | 1MB | 89,105 | 588KB | 46,219 | | kw | Cornish | 119KB | 20,775 | 72KB | 12,687 | | ky | Kyrgyz | 485MB | 33,401,287 | 334MB | 23,102,129 | | la | Latin | 103MB | 15,869,314 | 9MB | 1,488,545 | | lb | Luxembourgish | 54MB | 7,953,887 | 37MB | 5,454,220 | | lez | Lezghian | 2MB | 214,890 | 2MB | 198,433 | | li | Limburgish | 76KB | 12,105 | 54KB | 8,472 | | lmo | Lombard | 1MB | 203,002 | 1MB | 182,533 | | lo | Lao | 287MB | 6,928,229 | 163MB | 3,620,360 | | lrc | Northern Luri | 183B | 26 | 183B | 26 | | lt | Lithuanian | 12GB | 1,573,926,673 | 5GB | 701,326,575 | | lv | Latvian | 6GB | 799,923,431 | 2GB | 352,753,044 | | mai | Maithili | 685KB | 144,859 | 24KB | 1,916 | | mg | Malagasy | 59MB | 8,103,631 | 38MB | 5,220,655 | | mhr | Eastern Mari | 15MB | 1,170,650 | 10MB | 784,071 | | min | Minangkabau | 8MB | 451,591 | 1MB | 74,882 | | mk | Macedonian | 3GB | 261,571,966 | 1GB | 134,544,934 | | ml | Malayalam | 4GB | 182,898,691 | 2GB | 87,615,430 | | mn | Mongolian | 1GB | 143,244,180 | 912MB | 71,138,550 | | mrj | Western Mari | 645KB | 51,812 | 521KB | 41,950 | | mr | Marathi | 3GB | 173,001,078 | 1GB | 99,858,901 | | ms | Malay | 146MB | 20,433,250 | 60MB | 8,301,250 | | mt | Maltese | 51MB | 6,162,888 | 26MB | 3,179,815 | | mwl | Mirandese | 3KB | 419 | 2KB | 302 | | my | Burmese | 2GB | 54,624,239 | 1GB | 35,969,724 | | myv | Erzya | 29KB | 2,844 | 2KB | 236 | | mzn | Mazanderani | 1MB | 134,128 | 1MB | 106,533 | | nah | Nahuatl languages | 34KB | 3,664 | 21KB | 2,363 | | nap | Neapolitan | 1KB | 550 | 1KB | 235 | | nds | Low German | 25MB | 3,998,912 | 17MB | 2,868,608 | | ne | Nepali | 3GB | 207,891,824 | 2GB | 142,087,100 | | new | Newari | 6MB | 433,880 | 4MB | 254,711 | | nl | Dutch | 97GB | 15,248,924,083 | 47GB | 7,584,055,321 | | nn | Norwegian Nynorsk | 123MB | 20,629,675 | 66MB | 11,095,804 | | no | Norwegian Bokmål | 9GB | 1,492,984,384 | 4GB | 776,354,517 | | oc | Occitan | 12MB | 1,822,595 | 5MB | 834,187 | | or | Odia | 538MB | 30,838,706 | 357MB | 20,357,839 | | os | Ossetic | 11MB | 911,794 | 6MB | 536,525 | | pam | Pampanga | 3KB | 405 | 3KB | 405 | | pa | Punjabi | 769MB | 59,031,334 | 430MB | 33,413,527 | | pl | Polish | 122GB | 16,120,806,481 | 48GB | 6,496,098,108 | | pms | Piedmontese | 4MB | 804,600 | 
3MB | 644,017 |
| pnb | Western Panjabi | 68MB | 7,757,785 | 45MB | 5,221,168 |
| ps | Pashto | 404MB | 49,643,597 | 286MB | 35,345,424 |
| pt | Portuguese | 159GB | 24,770,395,312 | 71GB | 11,190,148,216 |
| qu | Quechua | 322KB | 40,691 | 230KB | 29,108 |
| rm | Romansh | 3KB | 512 | 3KB | 429 |
| ro | Romanian | 37GB | 5,629,438,576 | 15GB | 2,387,230,734 |
| rue | Rusyn | 247B | 14 | 247B | 14 |
| ru | Russian | 1201GB | 89,568,364,811 | 542GB | 41,194,052,384 |
| sah | Sakha | 57MB | 2,600,989 | 39MB | 1,944,651 |
| sa | Sanskrit | 72MB | 3,288,786 | 43MB | 1,998,089 |
| scn | Sicilian | 4KB | 712 | 3KB | 516 |
| sco | Scots | 1KB | 523 | 1KB | 282 |
| sd | Sindhi | 75MB | 8,937,427 | 50MB | 6,064,102 |
| sh | Serbian (Latin) | 13MB | 2,164,175 | 9MB | 1,461,045 |
| si | Sinhala | 1GB | 91,456,436 | 791MB | 47,770,919 |
| sk | Slovak | 14GB | 2,002,088,524 | 6GB | 865,456,498 |
| sl | Slovenian | 4GB | 610,843,131 | 1GB | 288,222,997 |
| so | Somali | 15KB | 849 | 13KB | 449 |
| sq | Albanian | 3GB | 493,861,192 | 1GB | 257,278,518 |
| sr | Serbian | 6GB | 574,460,746 | 3GB | 289,211,579 |
| su | Sundanese | 397KB | 54,420 | 274KB | 37,082 |
| sv | Swedish | 43GB | 6,542,433,732 | 19GB | 2,964,887,952 |
| sw | Swahili | 11MB | 1,853,022 | 7MB | 1,279,350 |
| ta | Tamil | 10GB | 438,489,984 | 5GB | 215,856,584 |
| te | Telugu | 3GB | 182,268,133 | 1GB | 73,193,605 |
| tg | Tajik | 985MB | 79,016,232 | 321MB | 26,069,632 |
| th | Thai | 62GB | 1,694,658,532 | 26GB | 635,230,676 |
| tk | Turkmen | 25MB | 2,693,720 | 20MB | 2,221,760 |
| tl | Filipino | 699MB | 115,471,760 | 383MB | 62,473,283 |
| tr | Turkish | 73GB | 8,763,467,387 | 33GB | 3,950,989,357 |
| tt | Tatar | 947MB | 68,793,924 | 424MB | 31,485,000 |
| tyv | Tuvinian | 9KB | 638 | 7KB | 542 |
| ug | Uyghur | 187MB | 12,786,741 | 123MB | 8,410,269 |
| uk | Ukrainian | 53GB | 4,014,675,914 | 28GB | 2,131,491,321 |
| ur | Urdu | 2GB | 354,937,986 | 1GB | 234,111,239 |
| uz | Uzbek | 56MB | 6,237,371 | 28MB | 3,327,595 |
| vec | Venetian | 37KB | 6,694 | 28KB | 5,139 |
| vi | Vietnamese | 87GB | 14,523,772,784 | 42GB | 7,011,404,625 |
| vls | West Flemish | 134B | 2 | 134B | 2 |
| vo | Volapük | 2MB | 426,052 | 2MB | 410,688 |
| war | Waray | 4MB | 750,162 | 4MB | 702,336 |
| wa | Walloon | 511KB | 93,163 | 329KB | 59,906 |
| wuu | Wu Chinese | 145KB | 9,130 | 69KB | 3,031 |
| xal | Kalmyk | 62KB | 5,495 | 62KB | 5,495 |
| xmf | Mingrelian | 16MB | 807,158 | 10MB | 510,700 |
| yi | Yiddish | 199MB | 18,699,112 | 93MB | 8,716,366 |
| yo | Yoruba | 229KB | 34,468 | 120KB | 17,487 |
| zh | Chinese | 500GB | 10,118,381,906 | 266GB | 3,898,987,727 |

</details>

## Dataset Creation

### Curation Rationale

OSCAR was constructed using [`Ungoliant`](https://github.com/oscar-corpus/ungoliant), a new pipeline derived from [goclassy](https://github.com/oscar-corpus/goclassy), itself being derived from [fastText's one](https://github.com/facebookresearch/fastText).

OSCAR 21.09 follows the [OSCAR Schema v1.1](https://oscar-corpus.com/post/oscar-schema-v1-1/), which adds metadata to each entry while staying backwards-compatible with OSCAR.

The order of operations is similar to that of the goclassy pipeline, with optimisations regarding IO and a finer granularity regarding multithreading. `Ungoliant` is implemented in the [Rust programming language](https://rust-lang.org), and uses [rayon](https://github.com/rayon-rs/rayon) as its data parallelism strategy.
Threading is done at shard, record and sentence level, making the whole generation process much more efficient.

Filtering is done at line level, removing lines shorter than 100 UTF-8 codepoints. While invalid UTF-8 characters are detected, they are not removed, but rather replaced with the [Replacement character](https://en.wikipedia.org/wiki/Special_(Unicode_block)#Replacement_character). After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.

### Source Data

#### Initial Data Collection and Normalization

[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.

Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.

To construct OSCAR, the WET files of Common Crawl were used. These contain the plain text extracted from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **February 2021** snapshot was used. It is composed of 64,000 compressed text files containing documents and their headers.

#### Who are the source language producers?

The data comes from multiple web pages in a large variety of languages.

### Annotations

The dataset does not contain any additional annotations.

#### Annotation process

N/A

#### Who are the annotators?

N/A

### Personal and Sensitive Information

Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.

## Considerations for Using the Data

### Social Impact of Dataset

OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.

### Discussion of Biases

OSCAR is not properly filtered yet, and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.

### Other Known Limitations

The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
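Because these labels drive the whole corpus construction, spot-checking individual records before training on a low-resource split can be worthwhile. Below is a minimal audit sketch; it assumes the `fasttext` Python package and its publicly released `lid.176.bin` identification model, neither of which ships with the corpus:

```
import fasttext

# Sketch of a record-level language-ID audit. lid.176.bin must be
# downloaded separately from the fastText website.
model = fasttext.load_model("lid.176.bin")

def audit(text, expected_lang, k=3):
    # predict() rejects newlines, so collapse them before scoring.
    labels, probs = model.predict(text.replace("\n", " "), k=k)
    top = [(label.replace("__label__", ""), float(p))
           for label, p in zip(labels, probs)]
    return top[0][0] == expected_lang, top

# e.g. the deduplicated_tyv sample shown earlier; if a code is not among
# fastText's 176 labels, that is exactly the coverage limitation above.
agrees, top = audit("Сарлык бызаазы – ниити ады, назыны бир хар чедир", "tyv")
print(agrees, top)
```

Records whose top label disagrees with their sub-corpus, or is predicted with low confidence, are natural candidates for the kind of quality issues those audits describe.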
## Additional Information

### Dataset Curators

The corpus was put together by [Julien Abadji](https://ujj.space), [Pedro Ortiz Suarez](https://portizs.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).

### Licensing Information

These data are released under the following licensing scheme:

* We do not own any of the text from which these data have been extracted.
* We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
* To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR.
* This work is published from: France.

Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.

We will comply with legitimate requests by removing the affected sources from the next release of the corpus.

### Citation Information

```
@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
  author    = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
  title     = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
  series    = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
  editor    = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
  address   = {Mannheim},
  doi       = {10.14618/ids-pub-10468},
  url       = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
  pages     = {1 -- 9},
  year      = {2021},
  abstract  = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level.
We release our pipeline under an open source license and publish the corpus under a research-only license.},
  language  = {en}
}

@ARTICLE{caswell-etal-2021-quality,
  author = {{Caswell}, Isaac and {Kreutzer}, Julia and {Wang}, Lisa and {Wahab}, Ahsan and {van Esch}, Daan and {Ulzii-Orshikh}, Nasanbayar and {Tapo}, Allahsera and {Subramani}, Nishant and {Sokolov}, Artem and {Sikasote}, Claytone and {Setyawan}, Monang and {Sarin}, Supheakmungkol and {Samb}, Sokhar and {Sagot}, Beno{\^\i}t and {Rivera}, Clara and {Rios}, Annette and {Papadimitriou}, Isabel and {Osei}, Salomey and {Ortiz Su{\'a}rez}, Pedro Javier and {Orife}, Iroro and {Ogueji}, Kelechi and {Niyongabo}, Rubungo Andre and {Nguyen}, Toan Q. and {M{\"u}ller}, Mathias and {M{\"u}ller}, Andr{\'e} and {Hassan Muhammad}, Shamsuddeen and {Muhammad}, Nanda and {Mnyakeni}, Ayanda and {Mirzakhalov}, Jamshidbek and {Matangira}, Tapiwanashe and {Leong}, Colin and {Lawson}, Nze and {Kudugunta}, Sneha and {Jernite}, Yacine and {Jenny}, Mathias and {Firat}, Orhan and {Dossou}, Bonaventure F.~P. and {Dlamini}, Sakhile and {de Silva}, Nisansa and {{\c{C}}abuk Ball{\i}}, Sakine and {Biderman}, Stella and {Battisti}, Alessia and {Baruwa}, Ahmed and {Bapna}, Ankur and {Baljekar}, Pallavi and {Abebe Azime}, Israel and {Awokoya}, Ayodele and {Ataman}, Duygu and {Ahia}, Orevaoghene and {Ahia}, Oghenefego and {Agrawal}, Sweta and {Adeyemi}, Mofetoluwa},
  title = "{Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets}",
  journal = {arXiv e-prints},
  keywords = {Computer Science - Computation and Language, Computer Science - Artificial Intelligence},
  year = 2021,
  month = mar,
  eid = {arXiv:2103.12028},
  pages = {arXiv:2103.12028},
  archivePrefix = {arXiv},
  eprint = {2103.12028},
  primaryClass = {cs.CL},
  adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210312028C},
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

@inproceedings{ortiz-suarez-etal-2020-monolingual,
  title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
  author = "Ortiz Su{\'a}rez, Pedro Javier and Romary, Laurent and Sagot, Benoit",
  booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
  month = jul,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/2020.acl-main.156",
  pages = "1703--1714",
  abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages.
In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}

@inproceedings{OrtizSuarezSagotRomary2019,
  author    = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
  title     = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
  series    = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
  editor    = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
  address   = {Mannheim},
  doi       = {10.14618/ids-pub-9021},
  url       = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
  pages     = {9 -- 16},
  year      = {2019},
  abstract  = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
  language  = {en}
}
```

### Contributions

Thanks to [@pjox](https://github.com/pjox), [@Uinelj](https://github.com/Uinelj) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
jiangjiechen/ekar_english
---
language:
- en
license:
- afl-3.0
size_categories:
- 1K<n<2K
source_datasets:
- original
task_categories:
- question-answering
- text-generation
task_ids:
- analogical-qa
- explanation-generation
---

# Dataset Card for ekar_english

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ekar-leaderboard.github.io
- **Paper:** [E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning](https://aclanthology.org/2022.findings-acl.311)
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1671/overview
- **Point of Contact:** jjchen19@fudan.edu.cn

### Dataset Summary

***New!*** (9/18/2022) E-KAR `v1.1` is officially released (at the `main` branch), **with a higher-quality English dataset!** In `v1.1`, we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find the previous version (as in the paper) in the `v1.0` branch of the repo. For more information please refer to https://ekar-leaderboard.github.io.

The ability to recognize analogies is fundamental to human cognition. Existing benchmarks that test word analogy do not reveal the underlying process of analogical reasoning in neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate it for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both the explanation generation and analogical question answering tasks, which invites further research in this area.

### Supported Tasks and Leaderboards

- `analogical-qa`: The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.
- `explanation-generation`: The dataset can be used to generate free-text explanations to rationalize analogical reasoning.

This dataset supports two task modes: EASY mode and HARD mode:

- `EASY mode`: where the query explanation can be used as part of the input.
- `HARD mode`: no explanation is allowed as part of the input.
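As a usage sketch of the two modes, the snippet below loads the dataset and renders one instance as a multiple-choice prompt. The Hub id and field names follow this card; the prompt template itself is an illustrative choice of ours, not an official protocol:

```python
from datasets import load_dataset

dataset = load_dataset("jiangjiechen/ekar_english")
example = dataset["train"][0]

def build_prompt(ex, mode="HARD"):
    # EASY mode may include the query explanation (explanation[0]);
    # HARD mode must not include any explanation in the input.
    lines = [f"Query: {ex['question']}"]
    if mode == "EASY":
        lines.append(f"Hint: {ex['explanation'][0]}")
    lines += [f"{label}. {text}" for label, text
              in zip(ex["choices"]["label"], ex["choices"]["text"])]
    lines.append("Answer:")
    return "\n".join(lines)

print(build_prompt(example, mode="EASY"))
print("Gold:", example["answerKey"])
```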
### Languages

This dataset is in English, translated from [its Chinese version](https://huggingface.co/datasets/Jiangjie/ekar_chinese/).

## Dataset Structure

### Data Instances

```json
{
  "id": "982f17-en",
  "question": "plant:coal",
  "choices": {
    "label": ["A", "B", "C", "D"],
    "text": [
      "white wine:aged vinegar",
      "starch:corn",
      "milk:yogurt",
      "pickled cabbage:cabbage"
    ]
  },
  "answerKey": "C",
  "explanation": [
    "\"plant\" is the raw material of \"coal\".",
    "both \"white wine\" and \"aged vinegar\" are brewed.",
    "\"starch\" is made of \"corn\", and the order of words is inconsistent with the query.",
    "\"yogurt\" is made from \"milk\".",
    "\"pickled cabbage\" is made of \"cabbage\", and the word order is inconsistent with the query."
  ],
  "relation": [
    [["plant", "coal", "R3.7"]],
    [["white wine", "aged vinegar", "R2.4"]],
    [["corn", "starch", "R3.7"]],
    [["milk", "yogurt", "R3.7"]],
    [["cabbage", "pickled cabbage", "R3.7"]]
  ]
}
```

### Data Fields

- id: a string identifier for each example.
- question: query terms.
- choices: candidate answer terms.
- answerKey: correct answer.
- explanation: explanations for the query (1st) and candidate answers (2nd-5th).
- relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).

### Data Splits

| name  | train | validation | test |
|:-----:|:-----:|:----------:|:----:|
|default|  870  |    119     | 262  |

Answers for the test split are blinded.

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.

### Discussion of Biases

This dataset is sourced and translated from the Civil Service Examinations of China. Therefore, despite the authors' efforts to remove or rewrite such problems, it may still contain information biased toward Chinese culture.

### Other Known Limitations

1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the one most suited to all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.
2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.
3. The English version of E-KAR is machine-translated and post-edited by humans. Although the authors have tried their best to maintain the translation quality, there could be some unsatisfactory samples in the English dataset, e.g., culture-specific ones, ones that become ambiguous after translation, etc.

## Additional Information

### Dataset Curators

The dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.
### Licensing Information

[Needs More Information]

### Citation Information

```latex
@inproceedings{chen-etal-2022-e,
    title = "{E}-{KAR}: A Benchmark for Rationalizing Natural Language Analogical Reasoning",
    author = "Chen, Jiangjie and Xu, Rui and Fu, Ziquan and Shi, Wei and Li, Zhongqiao and Zhang, Xinbo and Sun, Changzhi and Li, Lei and Xiao, Yanghua and Zhou, Hao",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-acl.311",
    pages = "3941--3955",
}
```
MLCommons/peoples_speech_v1.0
---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- cc-by-2.0
- cc-by-2.5
- cc-by-3.0
- cc-by-4.0
- cc-by-sa-3.0
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: People's Speech
size_categories:
- 1T<n
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids:
- speech-recognition
- robust-speech-recognition
- noisy-speech-recognition
---

# Dataset Card for People's Speech

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://mlcommons.org/en/peoples-speech/
- **Repository:** https://github.com/mlcommons/peoples-speech
- **Paper:** https://arxiv.org/abs/2111.09344
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [datasets@mlcommons.org](mailto:datasets@mlcommons.org)

### Dataset Summary

The People's Speech Dataset is among the world's largest English speech recognition corpora licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes 30,000+ hours of transcribed English speech from a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and, crucially, is available with a permissive license.

### Supported Tasks and Leaderboards

[Needs More Information]

### Languages

English

## Dataset Structure

### Data Instances

```
{
  "id": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
  "audio": {
    "path": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
    "array": array([-6.10351562e-05, ...]),
    "sampling_rate": 16000
  },
  "duration_ms": 14490,
  "text": "contends that the suspension clause requires a [...]"
}
```

### Data Fields

```
{
    "id": datasets.Value("string"),
    "audio": datasets.Audio(sampling_rate=16_000),
    "duration_ms": datasets.Value("int32"),
    "text": datasets.Value("string"),
}
```

### Data Splits

We provide the following configurations for the dataset: `cc-by-clean`, `cc-by-dirty`, `cc-by-sa-clean`, `cc-by-sa-dirty`, and `microset`. We don't provide splits for any of the configurations.
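Since the corpus exceeds a terabyte, streaming is the practical way to inspect it. A minimal sketch, assuming the Hub id `MLCommons/peoples_speech_v1.0`, one of the configurations named above, and that each configuration is exposed as a single `train` split:

```python
from datasets import load_dataset

# Stream rather than download; each example pairs audio with its transcript.
ds = load_dataset(
    "MLCommons/peoples_speech_v1.0",
    "cc-by-clean",          # any of the configurations listed above
    split="train",
    streaming=True,
)

example = next(iter(ds))
print(example["text"])
print(example["duration_ms"], "ms at",
      example["audio"]["sampling_rate"], "Hz")
```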
## Dataset Creation

### Curation Rationale

See our [paper](https://arxiv.org/abs/2111.09344).

### Source Data

#### Initial Data Collection and Normalization

Data was downloaded via the archive.org API. No data inference was done.

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

No manual annotation is done. We download only source audio with already existing transcripts.

#### Who are the annotators?

For the test and dev sets, we paid native speakers of American English to do transcriptions. We do not know the identities of the transcriptionists for data in the training set. For the training set, we have noticed that some transcriptions are likely to be the output of automatic speech recognition systems.

### Personal and Sensitive Information

Several of our sources are legal and government proceedings, spoken histories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is natural that the involved individuals are aware of this.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset could be used for speech synthesis. However, this requires careful cleaning of the dataset, as background noise is not tolerable for speech synthesis. The dataset could be used for keyword spotting tasks as well. In particular, this is a good use case for the non-English audio in the dataset.

Our sincere hope is that the large breadth of sources our dataset incorporates reduces existing quality-of-service issues today, like speech recognition systems' poor understanding of non-native English accents. We cannot think of any unfair treatment that comes from using this dataset at this time.

### Discussion of Biases

Our data is downloaded from archive.org. As such, the data is biased towards whatever users decide to upload there. Almost all of our data is American-accented English.

### Other Known Limitations

As of version 1.0, a portion of data in the training, test, and dev sets is poorly aligned. Specifically, some words appear in the transcript but not the audio, or some words appear in the audio but not the transcript. We are working on it.

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

We provide CC-BY and CC-BY-SA subsets of the dataset.

### Citation Information

Please cite:

```
@article{DBLP:journals/corr/abs-2111-09344,
  author     = {Daniel Galvez and Greg Diamos and Juan Ciro and Juan Felipe Cer{\'{o}}n and Keith Achorn and Anjali Gopi and David Kanter and Maximilian Lam and Mark Mazumder and Vijay Janapa Reddi},
  title      = {The People's Speech: {A} Large-Scale Diverse English Speech Recognition Dataset for Commercial Usage},
  journal    = {CoRR},
  volume     = {abs/2111.09344},
  year       = {2021},
  url        = {https://arxiv.org/abs/2111.09344},
  eprinttype = {arXiv},
  eprint     = {2111.09344},
  timestamp  = {Mon, 22 Nov 2021 16:44:07 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
sil-ai/bloom-captioning
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- afr
- af
- aaa
- abc
- ada
- adq
- aeu
- agq
- ags
- ahk
- aia
- ajz
- aka
- ak
- ame
- amh
- am
- amp
- amu
- ann
- aph
- awa
- awb
- azn
- azo
- bag
- bam
- bm
- baw
- bax
- bbk
- bcc
- bce
- bec
- bef
- ben
- bn
- bfd
- bfm
- bfn
- bgf
- bho
- bhs
- bis
- bi
- bjn
- bjr
- bkc
- bkh
- bkm
- bkx
- bob
- bod
- bo
- boz
- bqm
- bra
- brb
- bri
- brv
- bss
- bud
- buo
- bwt
- bwx
- bxa
- bya
- bze
- bzi
- cak
- cbr
- ceb
- cgc
- chd
- chp
- cim
- clo
- cmn
- zh
- cmo
- csw
- cuh
- cuv
- dag
- ddg
- ded
- deu
- de
- dig
- dje
- dmg
- dnw
- dtp
- dtr
- dty
- dug
- eee
- ekm
- enb
- enc
- eng
- en
- ewo
- fas
- fa
- fil
- fli
- fon
- fra
- fr
- fub
- fuh
- gal
- gbj
- gou
- gsw
- guc
- guj
- gu
- guz
- gwc
- hao
- hat
- ht
- hau
- ha
- hbb
- hig
- hil
- hin
- hi
- hla
- hna
- hre
- hro
- idt
- ilo
- ind
- id
- ino
- isu
- ita
- it
- jgo
- jmx
- jpn
- ja
- jra
- kak
- kam
- kan
- kn
- kau
- kr
- kbq
- kbx
- kby
- kek
- ken
- khb
- khm
- km
- kik
- ki
- kin
- rw
- kir
- ky
- kjb
- kmg
- kmr
- ku
- kms
- kmu
- kor
- ko
- kqr
- krr
- ksw
- kur
- ku
- kvt
- kwd
- kwu
- kwx
- kxp
- kyq
- laj
- lan
- lao
- lo
- lbr
- lfa
- lgg
- lgr
- lhm
- lhu
- lkb
- llg
- lmp
- lns
- loh
- lsi
- lts
- lug
- lg
- luy
- lwl
- mai
- mal
- ml
- mam
- mar
- mr
- mdr
- mfh
- mfj
- mgg
- mgm
- mgo
- mgq
- mhx
- miy
- mkz
- mle
- mlk
- mlw
- mmu
- mne
- mnf
- mnw
- mot
- mqj
- mrn
- mry
- msb
- muv
- mve
- mxu
- mya
- my
- myk
- myx
- mzm
- nas
- nco
- nep
- ne
- new
- nge
- ngn
- nhx
- njy
- nla
- nld
- nl
- nlv
- nod
- nsk
- nsn
- nso
- nst
- nuj
- nwe
- nwi
- nxa
- nxl
- nya
- ny
- nyo
- nyu
- nza
- odk
- oji
- oj
- oki
- omw
- ori
- or
- ozm
- pae
- pag
- pan
- pa
- pbt
- pce
- pcg
- pdu
- pea
- pex
- pis
- pkb
- pmf
- pnz
- por
- pt
- psp
- pwg
- qaa
- qub
- quc
- quf
- quz
- qve
- qvh
- qvm
- qvo
- qxh
- rel
- rnl
- ron
- ro
- roo
- rue
- rug
- rus
- ru
- san
- sa
- saq
- sat
- sdk
- sea
- sgd
- shn
- sml
- snk
- snl
- som
- so
- sot
- st
- sox
- spa
- es
- sps
- ssn
- stk
- swa
- sw
- swh
- sxb
- syw
- taj
- tam
- ta
- tbj
- tdb
- tdg
- tdt
- teo
- tet
- tgk
- tg
- tha
- th
- the
- thk
- thl
- thy
- tio
- tkd
- tnl
- tnn
- tnp
- tnt
- tod
- tom
- tpi
- tpl
- tpu
- tsb
- tsn
- tn
- tso
- ts
- tuv
- tuz
- tvs
- udg
- unr
- urd
- ur
- uzb
- uz
- ven
- ve
- vie
- vi
- vif
- war
- wbm
- wbr
- wms
- wni
- wnk
- wtk
- xho
- xh
- xkg
- xmd
- xmg
- xmm
- xog
- xty
- yas
- yav
- ybb
- ybh
- ybi
- ydd
- yea
- yet
- yid
- yi
- yin
- ymp
- zaw
- zho
- zh
- zlm
- zuh
- zul
- zu
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_ids:
- image-captioning
paperswithcode_id: null
pretty_name: BloomCaptioning
extra_gated_prompt: |-
  One more step before getting this dataset. This dataset is open access and available only for non-commercial use (except for portions of the dataset labeled explicitly with a `cc-by-sa` license).

  A "license" field paired with each of the dataset entries/samples specifies the Creative Commons license for that entry/sample. These [Creative Commons licenses](https://creativecommons.org/about/cclicenses/) specify that:

  1. You cannot use the dataset for or directed toward commercial advantage or monetary compensation (except for those portions of the dataset labeled specifically with a `cc-by-sa` license). If you would like to ask about commercial uses of this dataset, please [email us](mailto:sj@derivation.co).
  2. Any public, non-commercial use of the data must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
  3. For those portions of the dataset marked with an ND license, you cannot remix, transform, or build upon the material, and you may not distribute modified material.

  In addition to the above terms implied by the Creative Commons licenses, by clicking "Access Repository" below you agree:

  1. Not to use the dataset for any use intended to or which has the effect of harming or enabling discrimination against individuals or groups based on legally protected characteristics or categories, including but not limited to discrimination against Indigenous People as outlined in Articles 2; 13-16; and 31 of the United Nations Declaration on the Rights of Indigenous People, 13 September 2007 and as subsequently amended and revised.
  2. That your *contact information* (email address and username) can be shared with the model authors as well.
extra_gated_fields:
  I have read the License and agree with its terms: checkbox
---

## Dataset Description

- **Homepage:** [SIL AI](https://ai.sil.org/)
- **Point of Contact:** [SIL AI email](mailto:idx_aqua@sil.org)
- **Source Data:** [Bloom Library](https://bloomlibrary.org/)

![logo for Bloom Library](https://bloom-vist.s3.amazonaws.com/bloom_logo.png)

![sil-ai logo](https://s3.amazonaws.com/moonup/production/uploads/1661440873726-6108057a823007eaf0c7bd10.png)

## Dataset Summary

**Bloom** is free, open-source software and an associated website [Bloom Library](https://bloomlibrary.org/), app, and services developed by [SIL International](https://www.sil.org/). Bloom's primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development.

This version of the Bloom Library data is developed specifically for the image captioning task. It includes data from 351 languages across 31 language families. There is a mean of 32 stories and 319 image-caption pairs per language.

**Note**: If you speak one of these languages and can help provide feedback or corrections, please let us know!

**Note**: Although this data was used in the training of the [BLOOM model](https://huggingface.co/bigscience/bloom), this dataset only represents a small portion of the data used to train that model. Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool. 😉

## Languages

Of the 500+ languages listed at BloomLibrary.org, there are 351 languages available in this dataset.
Here are the corresponding ISO 639-3 codes:

aaa, abc, ada, adq, aeu, afr, agq, ags, ahk, aia, ajz, aka, ame, amh, amp, amu, ann, aph, awa, awb, azn, azo, bag, bam, baw, bax, bbk, bcc, bce, bec, bef, ben, bfd, bfm, bfn, bgf, bho, bhs, bis, bjn, bjr, bkc, bkh, bkm, bkx, bob, bod, boz, bqm, bra, brb, bri, brv, bss, bud, buo, bwt, bwx, bxa, bya, bze, bzi, cak, cbr, ceb, cgc, chd, chp, cim, clo, cmn, cmo, csw, cuh, cuv, dag, ddg, ded, deu, dig, dje, dmg, dnw, dtp, dtr, dty, dug, eee, ekm, enb, enc, eng, ewo, fas, fil, fli, fon, fra, fub, fuh, gal, gbj, gou, gsw, guc, guj, guz, gwc, hao, hat, hau, hbb, hig, hil, hin, hla, hna, hre, hro, idt, ilo, ind, ino, isu, ita, jgo, jmx, jpn, jra, kak, kam, kan, kau, kbq, kbx, kby, kek, ken, khb, khm, kik, kin, kir, kjb, kmg, kmr, kms, kmu, kor, kqr, krr, ksw, kur, kvt, kwd, kwu, kwx, kxp, kyq, laj, lan, lao, lbr, lfa, lgg, lgr, lhm, lhu, lkb, llg, lmp, lns, loh, lsi, lts, lug, luy, lwl, mai, mal, mam, mar, mdr, mfh, mfj, mgg, mgm, mgo, mgq, mhx, miy, mkz, mle, mlk, mlw, mmu, mne, mnf, mnw, mot, mqj, mrn, mry, msb, muv, mve, mxu, mya, myk, myx, mzm, nas, nco, nep, new, nge, ngn, nhx, njy, nla, nld, nlv, nod, nsk, nsn, nso, nst, nuj, nwe, nwi, nxa, nxl, nya, nyo, nyu, nza, odk, oji, oki, omw, ori, ozm, pae, pag, pan, pbt, pce, pcg, pdu, pea, pex, pis, pkb, pmf, pnz, por, psp, pwg, qub, quc, quf, quz, qve, qvh, qvm, qvo, qxh, rel, rnl, ron, roo, rue, rug, rus, san, saq, sat, sdk, sea, sgd, shn, sml, snk, snl, som, sot, sox, spa, sps, ssn, stk, swa, swh, sxb, syw, taj, tam, tbj, tdb, tdg, tdt, teo, tet, tgk, tha, the, thk, thl, thy, tio, tkd, tnl, tnn, tnp, tnt, tod, tom, tpi, tpl, tpu, tsb, tsn, tso, tuv, tuz, tvs, udg, unr, urd, uzb, ven, vie, vif, war, wbm, wbr, wms, wni, wnk, wtk, xho, xkg, xmd, xmg, xmm, xog, xty, yas, yav, ybb, ybh, ybi, ydd, yea, yet, yid, yin, ymp, zaw, zho, zlm, zuh, zul

## Dataset Statistics

Some of the languages in the dataset include only one or a few "stories." These are not split between training, validation, and test. For those with higher numbers of available stories, we include the following statistics:

| ISO 639-3 | stories | image-caption pairs |
|:---------:|--------:|--------------------:|
| ahk | 101 | 907 |
| awa | 163 | 1200 |
| bam | 4 | 86 |
| ben | 251 | 2235 |
| bho | 173 | 1172 |
| boz | 5 | 102 |
| bzi | 66 | 497 |
| cak | 67 | 817 |
| ceb | 418 | 2953 |
| cgc | 197 | 1638 |
| chd | 1 | 84 |
| dty | 172 | 1310 |
| eng | 2633 | 28618 |
| fas | 129 | 631 |
| fra | 403 | 5278 |
| hat | 260 | 2411 |
| hau | 256 | 1865 |
| hbb | 27 | 273 |
| ind | 259 | 2177 |
| jra | 139 | 1423 |
| kak | 195 | 1416 |
| kan | 21 | 168 |
| kek | 36 | 621 |
| kir | 382 | 4026 |
| kjb | 102 | 984 |
| kor | 132 | 2773 |
| mai | 180 | 1211 |
| mam | 134 | 1317 |
| mhx | 98 | 945 |
| mya | 38 | 421 |
| myk | 34 | 341 |
| nep | 200 | 1507 |
| new | 177 | 1225 |
| por | 163 | 3101 |
| quc | 99 | 817 |
| rus | 353 | 3933 |
| sdk | 11 | 153 |
| snk | 35 | 356 |
| spa | 528 | 6111 |
| stk | 7 | 113 |
| tgl | 0 | 0 |
| tha | 285 | 3023 |
| thl | 185 | 1464 |
| tpi | 201 | 2162 |

## Dataset Structure

### Data Instances

The examples look like this for Hausa:

```
from datasets import load_dataset

# Specify the ISO 639-3 language code, e.g. "hau" for the Hausa example below.
iso639_3_letter_code = "hau"
dataset = load_dataset("sil-ai/bloom-captioning", iso639_3_letter_code,
                       use_auth_token=True, download_mode='force_redownload')

# An entry in the dataset consists of an image caption along with
# a link to the corresponding image (and various pieces of metadata).
print(dataset['train'][0])
```

This would produce an output:

```
{'image_id': '5e7e2ab6-493f-4430-a635-695fbff76cf0',
 'image_url': 'https://bloom-vist.s3.amazonaws.com/%E0%A4%AF%E0%A5%87%E0%A4%B8%E0%A5%81%20%E0%A4%9A%E0%A5%81%E0%A4%B5%E0%A4%BE%20%E0%A4%89%E0%A4%A0%E0%A5%81%E0%A4%99%E0%A5%8D%E2%80%8C%E0%A4%99%E0%A4%BF%20%E0%A4%B2%E0%A4%BE%E0%A4%AE%E0%A5%8D%E2%80%8C%E0%A4%9F%E0%A4%BF%E0%A4%AF%E0%A4%BE%E0%A4%A8%E0%A4%BE/image2.jpg',
 'caption': 'Lokacinan almajiran suna tuƙa jirgin ruwansu, amma can cikin dare sun kai tsakiyar tafkin kaɗai. Suna tuƙi da wahala saboda iska tana busawa da ƙarfi gaba da su.',
 'story_id': 'cd17125d-66c6-467c-b6c3-7463929faff9',
 'album_id': 'a3074fc4-b88f-4769-a6de-dc952fdb35f0',
 'original_bloom_language_tag': 'ha',
 'index_in_story': 0}
```

To download all of the images locally to a directory `images`, you can do something similar to the following:

```
import io
import os
import uuid
import urllib.request
from functools import partial
from concurrent.futures import ThreadPoolExecutor

from PIL import Image
from datasets.utils.file_utils import get_datasets_user_agent

USER_AGENT = get_datasets_user_agent()
os.makedirs("images", exist_ok=True)

def fetch_single_image(image_url, timeout=None, retries=0):
    request = urllib.request.Request(
        image_url,
        data=None,
        headers={"user-agent": USER_AGENT},
    )
    for attempt in range(retries + 1):
        try:
            with urllib.request.urlopen(request, timeout=timeout) as req:
                if 'png' in image_url:
                    # Flatten the PNG's alpha channel onto a white background
                    # before saving as JPEG.
                    png = Image.open(io.BytesIO(req.read())).convert('RGBA')
                    png.load()  # required for png.split()
                    background = Image.new("RGB", png.size, (255, 255, 255))
                    background.paste(png, mask=png.split()[3])  # 3 is the alpha channel
                    image_id = str(uuid.uuid4())
                    image_path = "images/" + image_id + ".jpg"
                    background.save(image_path, 'JPEG', quality=80)
                else:
                    image = Image.open(io.BytesIO(req.read()))
                    image_id = str(uuid.uuid4())
                    image_path = "images/" + image_id + ".jpg"
                    image.save(image_path)
                return image_path
        except Exception:
            if attempt == retries:
                raise

def fetch_images(batch, num_threads, timeout=None, retries=3):
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image_path"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch

num_threads = 20
dataset = dataset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```

### Data Fields

The metadata fields below are available:

- **image_id**: a unique ID for the image
- **image_url**: a link for downloading the image
- **caption**: a caption corresponding to the image
- **story_id**: a unique ID for the corresponding story in which the caption appears
- **album_id**: a unique ID for the corresponding album in which the image appears
- **original_bloom_language_tag**: the original language identification from the Bloom library
- **index_in_story**: an index corresponding to the order of the image-caption pair in the corresponding story

### Data Splits

All languages include train, validation, and test splits. However, for languages having a small number of stories, certain of these splits may be empty. In such cases, we recommend using the data for testing only or for zero-shot experiments.

**NOTE:** The captions for the test split are currently hidden due to an ongoing shared task competition. They have been replaced by a placeholder `<hidden>` token.

## Changelog

- **25 October 2022** - Initial release
- **25 October 2022** - Update to include licenses on each data item
knkarthick/AMI
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10<n<1000
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: AMI Corpus
---

# Dataset Card for AMI Corpus

## Dataset Description

### Links

- **Homepage:** https://groups.inf.ed.ac.uk/ami/corpus/
- **Repository:** https://groups.inf.ed.ac.uk/ami/download/
- **Paper:** https://groups.inf.ed.ac.uk/ami/corpus/overview.shtml
- **Point of Contact:** https://huggingface.co/knkarthick

### Dataset Summary

The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section.

#### Synchronised recording devices:

Close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, and individual pens.

#### Annotation:

Orthographic transcription, plus annotations for many different phenomena (dialog acts, head movement, etc.).

Although the AMI Meeting Corpus was created for the use of a consortium that is developing meeting browsing technology, it is designed to be useful for a wide range of research areas. The downloads on this website include videos that are suitable for most purposes, but higher-resolution videos are available for researchers engaged in video processing. All of the signals and transcription, and some of the annotations, have been released publicly under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0).

### Languages

English

## Dataset Structure

### Data Instances

AMI Corpus is a meeting summarization dataset, consisting of 279 dialogues split into train, test and validation. The first instance in the training set:

{'id': '30',
'summary': "The project manager opens the meeting by stating that they will address functional design and then going over the agenda. The industrial designer gives his presentation, explaining how remote controls function and giving personal preference to a clear, simple design that upgrades the technology as well as incorporates the latest features in chip design. The interface specialist gives her presentation next, addressing the main purpose of a remote control. She pinpoints the main functions of on/off, channel-switching, numbers for choosing particular channels, and volume; and also suggests adding a menu button to change settings such as brightness on the screen. She gives preference to a remote that is small, easy to use, and follows some conventions. The group briefly discusses the possibility of using an LCD screen if cost allows it, since it is fancy and fashionable. The marketing expert presents, giving statistical information from a survey of 100 subjects. She prefers a remote that is sleek, stylish, sophisticated, cool, beautiful, functional, solar-powered, has long battery life, and has a locator. They discuss the target group, deciding it should be 15-35 year olds. After they talk about features they might include, the project manager closes the meeting by allocating tasks.",
'dialogue': "Speaker A: Cool.
Do you wanna give me the little cable thing? Yeah. Cool. Ah, that's why it won't meet. Okay, cool. Yep, cool. Okay, functional requirements. Alright, yeah. It's working. Cool, okay. So what I have, wh where I've got my information from is a survey where the usability lab um observed remote control use with um a hundred subjects and then they gave them a questionnaire. Um so it was all about, you know, how people feel about the look and feel of the remote control, you know. What's the most annoying things about remote controls and um the possibility of speech recognition and L_C_D_ screens in remote control. Not that they actually gave me any answers on the L_C_D_ screens, so I should have taken that bit out, but anyway. Um okay, so. What they found is that people don't like how current remote controls are, so you know, definitely you should be looking at something quite different. Um seventy five percent of users find most remote controls ugly. Uh the other twenty five percent have no fashion sense. Uh eighty percent of users would spend more to get um you know, a nice looking remote control. Um current remote controls, they don't match the user behaviour well, as you'll see on the next slide. Um I dunno what zapping is, but Oh, right. But you have that little thing that comes up at the bottom and tells you what's on. Um okay, fifty percent of users say they only use ten percent of the buttons, so that's going back to what, you know, we were saying earlier about, you know, do you need all the buttons on the remote control, they just make it look ugly. Okay? Cool. Um so this is my little graph thing. Mm k Okay, well, I can send it to all of you. What it is is um it's cones, 'cause I thought they'd be more exciting. Um but ooh where's it go? Back. Oh. Oh yes, cool. Okay, I'm gonna stop playing with the little pointy thing. Um okay, so like what it shows is how much things are used relatively and what you can clearly see from that is the thing that's used most is the channel selection. What you can't see is volume selection, it's a little bit higher than all the others. Yeah, so what the graph shows is that, you know, power, channel selection and volume selection are important, and the rest of them, you know, nobody really uses and so that's the the numbers along the top represent their like um their importance, you know, so on a scale of one to ten, how important is that and, you know, channel selection and volume selection are absolutely essential, and the power, well it's not quite so essential, apparently, although I don't understand how it couldn't be, um and everything else, I think, you know, you can forget about having those buttons on the remote control, 'cause they're just not needed, and they're not used. Okay. This is the bit that the email messed up for me and that's what I was fiddling about with at the beginning of the thing. Okay, cool. So um okay, so this is what people find annoying about remote controls. Uh that they get lost, that the uh you know, they're not intuitive and that they're bad for repetitive strain injury. I think if you're watching enough T_V_ to get repetitive strain injury from um you know, watching T_V_, then that's the least of your problems, but you know, it's up there. Um that yeah. 
Okay, so um I mean the the R_S_I_ thing would be that, like when you have the computer keyboards and you keep your wrists up would be something that encourages you want something with an ergonomic t design that encourages good use of the remote control and you know, not straining your wrists watching T_V_. Yes. Okay, cool. Right, um sorry this is pink because I was copying and pasting the table, and I didn't have time to white it out again. Um okay, but that shows how people whether they would pay more for voice recognition software. So you can see from that that, you know, younger people to the age of thirty five are quite likely to pay quite a lot more f well quite are quite likely to pay more for voice recognition software, whereas as people get older, they're a bit more sceptical about it and they're less willing to to try it. Um so clearly voice recognition is something to think about, but um you know I d I do wonder how well that would work given that a T_V_, you know, tends to be people talking and um, you know, how are you going to stop it from just flipping channels whilst watching T_V_. Um okay? Cool. Um okay, so these are my personal preferences. So you have sleek, stylish, sophisticated, you know, so something that's, you know, a bit cool. Um you know, functional, so it's useful, but minimalist. Um there's a there's an important thing that, you know, people use when, you know, when you're filling up your home, you know, a lot of people fill up their home with bits of crap, basically, you know, and you've got all this stuff, and you're just like, what the hell is that, who is ever gonna use it? You know, so things should either be functional or beautiful or preferably both, so I think we need to aim for both. Um okay, then a long battery life, like you were talking about earlier and um, you know, I was thinking that solar power would be quite cool because, you know, your remote control just sits there, and you could just sit it in the sunshine and save the environment a bit. Um and then like a locator, so you know, kind of like you have for a mobile phone or not a mobile phone Yeah, that's it, you know. I know, it's weird. My flatmate and I were talking about this on the way into uni this morning and I was like I need to get one for everything. So yeah, so maybe something where you clap and then it beeps, something a kind of sound that you don't often hear on the T_V_, you know, 'cause you don't want your remote control beeping every five minutes, 'cause you you'd then deliberately lose it by throwing it out the window or something. So okay? Cool. That's me. Cat's. Ca. Yeah, I mean that's the thing is that it didn't say in the survey, you know, whether, you know, these are the people that will pay more for a more stylish remote control, but I'm assuming, you know, yes. Well, that's when you go to uni, isn't it? So, you know Yeah. Oh, I've unplugged it. Do you want me to Yeah. Seventy six point three percent. Yeah. Yeah, I kn I mean I know what you're saying about the fifteen to twenty five year olds, but I mean it has been proven that that people of that age group have a higher disposable income because they don't have like I mean, you know, if you're at university, you're paying your rent, but you don't have a mortgage, you don't have a life insurance policy, you don't normally have a car, yeah, so. You're still learning to drive actually, so that just costs more than a car, but yeah. Um so I mean like it is an age group to target, really, I think. 
No, I mean that's what, that's like fifteen Pounds? You know, I think Yeah, I d I don't know many people without a T_V_. We didn't have a T_V_ last year, and everyone thought we were off our heads, you know. Yeah, I d well we've we've got quite a d decent T_V_. Yeah. I think I think the fact that, you know, ninety one point two percent of fifteen to twenty five year olds are saying yes, I would pay more for a voice recognition remote control, does say quite a lot really. You know, so I mean that and the disposable income and I don't think it's something to ignore, you know. Is not a massive difference, you know. No, do totally. You do have it in your mobile phone though, don't you? Because you have like I mean every mobile phone now has like call this person and it calls them. I don't know. Yeah. S so y you'd maybe need a code word. Do you know what I mean? So like when you say change, except that's being said quite a lot on T_V_, so maybe like, you know, remote. I mean how often do people say remote on T_V_? Although I only watch Charmed, so really I wouldn't know but like so you'd just say remote five, you know, remote ten, remote one two nine. I don't think there's a lot of uh voice recognition remote controls. Yeah, that would be another way to do it. Yeah, but then the code word would be even more important, because I mean Sky advertise on every channel, don't they, you know, so then it would be you'd be watching Charmed, and then the Sky advert would come on and it would change to Sky. Yeah, yeah, and that would be really annoying. Yeah. Do you not think that defeats the object of having voice recognition on a remote control though? Yeah, you know, so you have to have the remote control. It's more like if you lost it and it's down the sofa sometime, you can yell at it and it'll just change it, you can look for it later, yeah. Yeah, yeah, I suppose nearer to you but a b like if you have surround sound then Yeah. Yeah, 'cause it's it's quite important that you don't lose the the bit to locate the remote control. Yeah, definitely, yeah. Oh, so y you want our um PowerPoint presentations in there, hey? Okay. There you go. But is everyone's called functional requirements? Okay, so that's good. That's me done. Okay, cool.\r\nSpeaker B: No. Mm. Um um wi on on a what? Oh project project documents, yeah, yeah, yeah, okay. Oh okay, yeah. Yes, I think so. Yeah, the last minute, yeah, yeah. Yeah. Um Okay. Hmm. Mm. Okay, yeah, afterwards, yeah, okay. Thanks. I think we need like some general discussion at the end probably. Yeah. Yeah, I think since since we were discussing some um design issues then I I I would like to continue okay, yeah. Thanks. Oh i Okay, I hope wait. Should it just There's just nothing. Oh right, right, right, um Okay. Nothin okay, something is coming up. No signal? Why? Oh. My my computer went blank now. Adjusting. But I don't see anything I don't see anything on my computer now. This is the problem, but Um. Uh now it's okay. No? No. Oh okay. Okay, that's fine, that's good. Okay, let's start from the beginning. So I'm going to speak about technical functions design uh just like some some first issues that came up. Um 'kay, so the method I was um adopting at this point, it's not um for the for the whole um period of the um all the project but it's just at th at this very moment. Um uh my method was um to look at um other um remote controls, uh so mostly just by searching on the web and to see what um functionality they used. 
And then um after having got this inspiration and having compared what I found on the web um just to think about what the de what the user really needs and what um what the user might desire as additional uh functionalities. And yeah, and then just to um put the main function of the remote control in in words. Um so the findings uh were um that the main function of the remote control is is just sending messages to the television set, so this quite straightforward. And uh w some of the main functions would be switching on, switching off, uh then the user would like to switch the channel um for example just m changing to the next channel to to flip through all all of the possible channels, or then mm uh the other possibility would be that um she might just want to choose one particular channel, so we would need the numbers. And and also the volume is very important. Um um I als okay. 'Kay. Um um among the findings I found that m m most of the curr mm presently available remote controls also include other mm functionalities um in their design, like operating a V_C_R_, but they don't seem to be able to deal with D_V_D_ players, but then there are surely there are many other functionali functions that could possibly be added to them, but according to the last minute update um actually um we do not want to have all this complicated functions added to our design. So my personal preferences would be uh to keep the mm the whole remote control small um just like the physical size. And then it must be easy to use, so it must follow some conventions um like whereabouts you find the on off button and maybe the colour tends to be red or something. Um then yeah, the must-have buttons would be on off and then the channel numbers and then um the one that allows us to go to the next or the previous channel, and then volume has to be there. But then um other functionalities um could be just uh there could be a menu button and you could change things on the screen then, um for example brightness and mm similar functions could be just um done through the menu. And yeah, the last question I had about whether we wanted to incorporate n uh more functionalities, the answer was already no because of the last minute update. So at the for the time being that's uh that's all. If you have questions Yeah, and also it's it's um other question is uh because there are so many different And there are so many different things that could possibly be included because besides video and D_V_D_ there are the mm um video C_D_s and whatever, so it might be problematic to to choose between all these possible things. Um well, I think the buttons are still mm kind of the most um easy for the user to use, I mean um what other options would you have? A little screen or something, but this would be really kind of I think a lot of learning for the user and and I mean the user just wants to get um get a result um quickly, not to spend time in like um giving several orders um I dunno. I think I th I would I would think the put the buttons, but if if you have other mm proposals um. Yeah. Yeah. Mm-hmm. Yep. Uh am I going in the right direction? No. Wait. Okay, here it comes. Okay, here you are. Um that's very good, very interesting. Mm-hmm. Yeah. Yeah, you share a television or something that yeah. It was seventy something, yeah, yeah. Yeah this this is not unaffordable, but the problem is whether people need it, whether they do have a T_V_ to use its full Yeah. Common, the students yeah, yeah. 
The s the stu yeah, and the remote control might not yeah, it might not even function with the old T_V_. Yeah, we're still yeah. Or w maybe we can just kind of uh uh Yeah, but at the same time I think maybe we can we can just decide to to have both of these groups as our target, because actually I mean they're all still re young people. Yeah. Yeah. Yeah. Yeah. An Yeah. Yeah. Yeah but uh um Yeah, yeah sure, yeah, yeah. Yeah. Yeah, w well now the v the voice recognition if if it works wonderfully w we could possibly do away with all buttons, but I think this is not really the right moment yet, because people are just so used to buttons and um, yeah it's it's kind of safer, so we we need both, so the voice recognition would be just an extra, it wouldn't really reduce the size of the remote. Yeah but m but on the other hand, remote control isn't as close to you you probably might just just uh speak into it and and the T_V_ would be already further away, so it might not pick up the other things coming from there. Yeah, but then the remote control I think I mean um the idea is kind of it's it's not that it's sitting there on on top of the television, because then you could already yell at the television and you wouldn't you you wouldn't need the remote control, so the remote control is still something you keep n near yourself. Yeah, yeah, yeah. No, but I I I was just defending the the fact why why we want to keep the remote control close to us, a and uh not to yell at it from the distance. Okay. Oh yeah, yeah. Okay, yeah, mm-hmm. The major ones, yeah. Mm-hmm. Mm-hmm. Yeah. Did you find it? It's just yeah, yeah. Oh so so we'll just put them i there, we we yeah, w we won't even okay. Yeah. Yeah. Uh something conceptual, yeah. Hmm. Sorry, but um the next meeting um are we going to have it um right after lunch or shall we prepare our To prepare, okay, yeah, that's good. Okay. Cool. Okay, see you.\r\nSpeaker C: Mm. You said uh targ target groups, what does that mean? Uh okay, 'kay. So are Okay. Alright. I can go first, yeah. Right. Um so f from the Right sure. Uh okay. So n uh with uh with regard to the uh working design of this uh uh remote control uh I've identified um a few basic uh components of the remote and uh se uh from the design, functional design perspective um w I c we can now uh know wha what exactly the components are and how how they work together with each other. So this is the method that uh I'll mostly be following in my um in my uh role. Um the identification of the components, uh and uh since since I'm dealing only with the technical aspects, I would need feedback from the marketing person uh and uh from the user interface person. Uh we'll then integrate this into the product design at a technical level and uh basically update and come up with a new design, so it's a cyclical process. Okay, so these were the basic findings from today. The last three bullets have been integrated from uh the last minute uh email. Uh I just quickly jotted them down. Um so basically uh the as I told you the identification of how the remote control works and what are the various parts to it uh and what are the different processes um and how the parts uh communicate with each other. Um okay, so e the mee email said that teletext is now outdated, so we need to do away with that functionality of the remote control. Um also uh the remote control should be used only for television, because incorporating other features um makes it more comp complex. 
And the reason why teletext is outdated because uh of internet and uh the availability of internet over television. How however, our our remote control would only be dealing uh with the the use for television, in order to keep things simple. Um also the management wants that um our design should be unique uh it so it should incorporate um colour and the slogan uh that our company um has it as its standard. Okay, so he he here is a functional overview of the remote control. Um there's basically an energy source at the heart uh which feeds into the chip and the user interface. The user interf interface communicates with the chip, so I'll basic go over to the Okay. So if uh if this is our energy source and this is a cell, uh it communicates uh it feeds energy into the into the chip, which basically finds out h uh how how to do everything. There is a user interface here. So whe when the user presses a button, it feeds into the chip and the chip then generates a response and takes the response to an infrared terminal, um which then so the output of the chip is an infrared bit code, which is then communicated to the remote site, which h has an infrared receiver. Um the there can be uh a bulb here or something to indicate whether the remote is on or communicating. Um so these are the essent so a all the functionality of the remote control, whatever new functions that we need to do, um make the chip more complicated uh and bigger, basically. Okay. Um so i in my personal preferences um I'm hoping that we can ke keep the design as simple and clear as possible. This would uh help us uh to upgrade our technology at a future point of time. And uh also if we can incorporate uh the latest features in our chip design, so that our um uh remote control does not become outdated soon and it's compatible with mot most uh televisions. That's about it. So anything that you would like to know or No, I don't have any idea about what each component costs. Um yeah. Anything else? Yeah. Certainly, yeah. So so tha yeah, we definitely need to operate within our constraints, but um unfortunately I I do not have any data, so uh I just identified the functional components for that. Yeah, okay. Yeah. Mm 'kay. I it'll take some time. Oh, there it is, yeah. It'll come up, it um uh no signal. Yeah yeah, it says something now, adjusting Okay. Oh, that's strange. Okay. And one more time. Mm. Sorry, cou could you go back for a second? Uh switching on off channel, uh volume, okay, that's great. So in the u user interface requirements uh uh uh we we have been able to identify what are the basic buttons that we do want. Um but um so so at this stage, uh how we go about implementing those button we will not identify or I mean in we can completely do away with buttons and uh have some kind of a fancy user interface or something like that. But uh is is there any uh uh any thoughts on that? Right. Yeah, and it'll make the costs yeah. Right. Uh I think the co costs will also play a big role when we come to know about them. So well we can probably wait until t we have more knowledge on that. Uh i if the if the costs allow, we can have like an L_C_D_ display and uh with um because we do want something fancy and fashionable as well. So yeah? Cool. try to press oh, okay, yep. Mm. Right. Mm-hmm. Mm. Right. Mm-hmm. Hmm. Right. Mm. Mm. Mm. Some kind of a ring, some Right. Hmm. Okay, that's great, thanks. Mm. 
I think one of the very interesting things that came up in um uh Ka Kate Cat Cat's uh presentation was um uh this this issue of uh uh like voice recognition being more popular with uh younger people. So if we need to have a target group um then uh I think as far as the m motto of our company is concerned, if we want to have something sleek and uh you know, good looking uh we are better off targeting a younger audience then um you know, people who are comparatively elderly. Um. Right. Right. Bu but but the survey did say that f things like voice recognition are more popular with them, so if you want to put in something stylish, then uh th it'll certainly be more popular with this i ye with the younger people as compared to older people, yeah. Right, and Right. Mm. Right. But uh still, if if you can go back to that slide and uh, how popular was it? Oh, oh, okay. That's alright, if you can just look it up on your computer, wh uh um people between twenty five to thirty five, uh how popular was so it was sti still still quite popular amongst them. So even they are seventy six percent, is that high amount? Alright. Yeah. So you're more likely to b Yeah. Yeah. Mm. Bu but even even in the case of twenty five to thirty five it's quite popular, right? So mm uh are are are Mm. Mm. Um I was having a a general outlook on um m most like sophisticated features, but voice recognition itself I'm not very sure about, because one of the p uh things that Cat pointed out was uh uh how do we go about implementing it? Uh and uh Yeah. But how frequently do we use it anyway and um uh h ho how good is it, you know uh voice recognition softwares are still quite uh Yeah. Right. Right. Okay. O Right. Mm. Right. Yeah. Okay, so it seems like a feasible thing to implement uh for for a limited yeah. Mm. W What uh Mm. What wh uh what I was thinking is that there is this uh separation between what the channels are on T_V_ and how they are numbered on the remote control. If we can do with away with that, our product can be really popular uh in the sense that uh a person can say, I want to watch uh I_T_V_ one instead of saying that I want to go onto channel number forty five. Yeah, so if uh if something like that can be incorporated, some kind of Mm-hmm. Alright. Yeah, that's Right. Mm. Mm yeah and it might become very difficult from a distance for the television to understand what you're saying because of the noise factor for the remote control being cl I mean it'll it'll mm. Yeah. Mm. So uh wh another thing uh that can be used is that uh there can be a beeper button on the T_V_, so you can go and press that button and um and the remote control, wherever it is, it'll beep, so we we can probably come to know where it is. Right, yeah, yeah, yeah. Alright, yeah. Right. Okay. So where exactly is this i Ah, okay. Yeah. Yeah, yeah in that one, right yeah. No. Right. I guess I'll find out. Wha what was it again that I was supposed to look into? Con components, oh.\r\nSpeaker D: All hooked up. Okay, so now we are here at the functional design meeting. Um hopefully this meeting I'll be doing a little bit less talking than I did last time 'cause this is when you get to show us what you've been doing individually. The agenda for the meeting, I put it in the sh shared documents folder. I don't know if that meant that you could see it or not. Did anyone? No. Oh well. Um I'll try and do that for the next meeting as well so if you check in there, there's a shared project documents folder. Um and it should be in there. 
Project documents, yeah. So I'll put it in there. Is it best if I send you an email maybe, to let you know it's there? Yep. I'll do that next time. Um I'll act as secretary for this meeting and just take minutes as we go through, and then I'll send them to you after the meeting. The main the main focus of this meeting is your presentations that you've been preparing during the time, so we'll go through each of you one by one. Um then we need to briefly discuss the new project requirements that were sent to us. I just sent at the last minute, I'm sorry about that, but we can see how that affects what you were you were doing. Um and then we need to, by the end of the meeting come to some kind of decision on who our target group's going to be and what the functions of the remote control that's the the main goal is to come up with those two things, target group and functions of the remote control. And we've got forty minutes to do that in. So I would say yeah? As uh who it is that we're going to be trying to sell this thing to, yeah. So we need to yeah, we need to have a fairly defined group that that we want to focus on and then look at the functions um of the dem remote control itself. So with that I think it's best if I hand over to you. Does anyone have a preference for going first? You wanna go first? Okay, so we need to unplug my laptop and plug in yours. I assume we just pull it out? Just before you start, to make it easier, would you three mind emailing me your presentations? Once we you don't have to do it now but when once you go back, just so that I don't have to scribble everything down. Hmm. Mm-hmm. Okay. Do you have any um i idea about costs at this point? Br Okay. 'Cause that's something to consider, I guess, if we're if we're using more advanced technology, it might increase the price. Yeah. That's fine. Are there any more questions, or shall we just skip straight to the next one and then we can discuss all of them together at the end? Yeah, I think that will do. Okay, so do you want to Yes, shall shall we pull this up? I think that has to come out of there. Yeah. Yeah, I thought those last minute things, they're gonna hit you the worst. It ta takes a little Oh, and have you you need to then also press on yours, function F_ eight, so the blue function key at the bottom and F_ eight. Now it's coming, computer no signal. Maybe again? Okay, adjusting. There we go, there we go. Oh, if you press if you press function and that again there's there's usually three modes, one where it's only here, one where it's only there, and one where it's both. Okay, so one more time. Should yeah just wait for a moment, adjusting. Okay. Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. Yeah. If I mean that was the the directive that came through from management, but if we had a a decent case for that we really think it's important to include video and D_V_D_, I could get back to them and see. It's w it's just whether it's worth arguing about. Mm-hmm. Yeah. Mm-hmm. Okay. Are there any questions for clarification of Maarika before we go on to the next one? Mm-hmm. Mm. Mm. Mm-hmm. Sure, we can discuss that maybe after the next one. Do you want to yeah. Oh, I'm getting hungry. You set? Uh we need to do the function key thing so that it comes up on here. Hello. Is it plugged in prop it's working? Okay. Excellent. It's um switching between channels, sort of randomly going through. Mm. Ooh, that's a bit difficult to see. If you explain it to us it'll be fine. Yeah. I liked the, I liked the litt ooh come back. No. Okay. 
Mm-hmm, that's the next one along, yeah? Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. The remote control. Mm-hmm. That's alright. Mm. Keys and things like that, yeah. Whistle and it screams at you, yeah. Mm-hmm. That's you, excellent. Um. I'm just gonna tick yes. So, we've got about ten, fifteen minutes to discuss Mm-hmm. Yeah. Mm-hmm. Yeah. Then again I guess the th where it was most popular was the fifteen to twenty five bracket and the I don't know how often they're buying televisions. Yeah, but you don't have much money, generally. I would've thought it's it's more that twenty five to thirty five, when people are really moving out and they've got their first job and they want their nice toys and O oh it's on sorry, we unplugged it. Here, let me Yeah. Mm-hmm. Yeah. Yeah, they've got no commitments and usually not a car and all of those things. Kids. Yeah. Yeah, and if we're if we're talking twenty five Euros as a price, that's not unaffordable, even for young people. Yeah. Yeah. But do they But the T_V_s are often kind of someone's old T_V_ that's blah blah and be a bit strange to have a fancy rome remote. Mm. Yeah. Yeah. Yeah. Yeah. Yeah, if we ta if we take fifteen to thirty five, but that then does imply that we should try and incorporate voice recognition. Is that gonna have a an implication for the technical specs? Mm-hmm. Yeah. Yeah. With um but with a T_V_ remote it's gonna be quite limited if we're t saying the main things people want to do is on off channel five, louder, tha that should be relatively simple. Mm. Yeah. Mm-hmm. Yeah, but maybe if you wanna look into that just to just to check. Um, so if we go for the the fifteen to thirty five age group and then of course we're going to get th anyone who's older than thirty five who wants to look young and hip and trendy and has the money, then they'll they'll still go for the same advertising. Yeah, I think we need both. Yeah. Mm. Uh-huh. Uh-huh. So that if that was in the the voice recognition, that would be great. Yeah. Yeah. Watch Sky and yeah. Mm-hmm. But that's definitely a possibility. Yeah. So that you can yell at it, yeah. Yeah. Alright. Mm. Yeah. Yeah. Yeah. Yeah. Mm-hmm. That's but then if you're buying the remote separately, but y you could have something, but i if it was something that you could like stick onto the T_V_ or something, some like a two p if you bought it in a two part pack, so one part attaches to the T_V_. The l Well that's right, but it solves the problem of having different noises. Yeah. Okay, I think we're gonna have to wrap this up um. But if we go away with that that kind of general um specification in mind that we're looking at fifteen to thirty five year olds, we want it to look simple, but still have the buttons so it's easy to use, but only those key buttons, the major buttons and then one sort of menu one, and then voice recognition included as an option um but that obviously needs a little bit more working out as to whether it's really feasible and some of those problems we were mentioning um. What we have to do now is to go back to our little places, complete our questionnaire and some sort of summarisation, which y you'll get immediately by email. Send me your presentations so that I can use them to make the minutes, and then we've got a lunch break and after lunch we go back to our own little stations and have thirty minutes more work. Um I'll put the minutes in that project documents folder, but I'll send you an email when I do it, so that you know. It should be on your desktop, so on the yeah. 
So I'll put it I'll put them there as soon as I've written them. Yeah, and email them round. Yeah, that would be great. Oh yeah, put them in there. Yeah, then you don't have to email them. No, they're all called something slightly different. Technical requirements and something something, yeah. So, if you put them in there, we'll all be able to see them and refer to them if we need to. Um as to where we're going from here, you're going to look at the components concept. Yeah? Whatever that means. Yeah. You'll be looking you'll be looking at the user interface concept, on something conceptual and you're watching trends to see how we go and surely voice recognition'll fall off the map or something that um we'll keep keep our options op hmm? Components, yeah. No, we have we have after lunch we have thirty minutes to ourselves to prepare, so that's fine, w before lunch we just have to complete the questionnaire and some sort of summary. Okay? Right on time. Okay, so you can I guess we'll see you for lunch in a sec?"}

### Data Fields
- dialogue: the text of the dialogue.
- summary: a human-written summary of the dialogue.
- id: the unique file id of an example.

### Data Splits
- train: 209
- val: 42
- test: 28

## Dataset Creation

### Curation Rationale
See the dataset description above.

### Who are the source language producers?
linguists

### Who are the annotators?
language experts

## Licensing Information
Licence: CC BY 4.0

## Citation Information
```
Carletta, J. (2006) Announcing the AMI Meeting Corpus. The ELRA Newsletter 11(1), January-March, p. 3-5
```

## Contributions
Thanks to Carletta for adding this dataset.
alkzar90/croupier-mtg-dataset
---
annotations_creators:
- found
language: []
language_creators: []
license:
- apache-2.0
multilinguality: []
pretty_name: 'Croupier: a Magic the Gathering creatures dataset'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- mtg
- magic-card-game
- creature-dataset
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---

## Dataset Description

- **Homepage:** the [Gatherer](https://gatherer.wizards.com/Pages/)
- **Repository:** https://github.com/alcazar90/croupier-mtg-dataset

### Dataset Summary

A dataset of card images covering four creature types from the Magic: The Gathering card game: elf, goblin, knight, and zombie.

## Dataset Creation

All card information for the Magic: The Gathering card game is publicly available on the [Gatherer](https://gatherer.wizards.com/Pages/) website, the official Magic card database. This dataset is a subset covering four creature types from the game.
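A minimal loading sketch with the 🤗 `datasets` library (the split name and column names are assumptions, not documented in this card):

```python
from datasets import load_dataset

# Load the card-image dataset from the Hub; a "train" split is assumed.
ds = load_dataset("alkzar90/croupier-mtg-dataset", split="train")

# Inspect one example and its columns (an image plus a creature-type label).
example = ds[0]
print(example.keys())
```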
abid/indonesia-bioner-dataset
---
license: bsd-3-clause-clear
---

### Indonesia BioNER Dataset

This dataset was taken from the online health consultation platform Alodokter.com and has been annotated by two medical doctors. Data were annotated using the IOB scheme in CoNLL format.

The dataset contains 2,600 medical answers written by doctors between 2017 and 2020. Two medical experts were assigned to annotate the data into two entity types: DISORDERS and ANATOMY. The topics of the answers are diarrhea, HIV-AIDS, nephrolithiasis, and TBC, which the WHO marks as high-risk topics.

This work was made possible by the generous support of Dr. Diana Purwitasari and Safitri Juanita.

> Note: this data is provided as is in Bahasa Indonesia. No translations are provided.

| File | Amount |
|-------------|--------|
| train.conll | 1950 |
| valid.conll | 260 |
| test.conll | 390 |
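A minimal reading sketch for the CoNLL files, assuming the usual layout of one token-tag pair per line with blank lines separating sentences (the exact column layout is an assumption):

```python
def read_conll(path):
    """Yield sentences as lists of (token, iob_tag) pairs."""
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # a blank line closes the current sentence
                if sentence:
                    yield sentence
                sentence = []
                continue
            parts = line.split()
            sentence.append((parts[0], parts[-1]))  # token, IOB tag
    if sentence:
        yield sentence

# Example usage:
# for sent in read_conll("train.conll"):
#     print(sent)
```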
HuggingFaceM4/cm4-synthetic-testing
---
license: bigscience-openrail-m
---

This dataset is designed to be used in testing multimodal text/image models. It's derived from the cm4-10k dataset.

The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`.

The `unique` ones ensure uniqueness across text entries.

The `repeat` ones repeat the same 10 unique records; these are useful for debugging memory leaks, since the records never change and record-to-record variation is removed from the equation.

The default split is `100.unique`.

The full process of creating this dataset is documented inside [cm4-synthetic-testing.py](./cm4-synthetic-testing.py).
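For example, loading the small repeating split for a leak-hunting run (split names are taken from the list above):

```python
from datasets import load_dataset

# "100.repeat" cycles the same 10 records, so any memory growth observed
# across iterations cannot come from record-to-record variation.
ds = load_dataset("HuggingFaceM4/cm4-synthetic-testing", split="100.repeat")
for example in ds:
    pass  # feed `example` to the model under test and watch memory usage
```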
julien-c/titanic-survival
---
license: cc
tags:
- tabular-classification
task_categories:
- tabular-classification
---

## Titanic Survival

From https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/problem12.html
bigbio/swedish_medical_ner
---
language:
- sv
bigbio_language:
- Swedish
license: cc-by-sa-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_SA_4p0
pretty_name: Swedish Medical NER
homepage: https://github.com/olofmogren/biomedical-ner-data-swedish/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---

# Dataset Card for Swedish Medical NER

## Dataset Description

- **Homepage:** https://github.com/olofmogren/biomedical-ner-data-swedish/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER

swedish_medical_ner is a named entity recognition dataset on medical text in Swedish. It consists of three subsets, each derived from a different source: the Swedish Wikipedia (a.k.a. wiki), Läkartidningen (a.k.a. lt), and 1177 Vårdguiden (a.k.a. 1177). The Swedish Wikipedia and Läkartidningen subsets together contain over 790,000 sequences of 60 characters each, automatically annotated using a list of medical seed terms. The 1177 Vårdguiden subset was manually annotated and contains 927 sentences with 2,740 annotations, of which 1,574 are disorders and findings, 546 are pharmaceutical drugs, and 620 are body structures.

## Citation Information
```
@inproceedings{almgren-etal-2016-named,
  author = {Almgren, Simon and Pavlov, Sean and Mogren, Olof},
  title = {Named Entity Recognition in Swedish Medical Journals with Deep Bidirectional Character-Based LSTMs},
  booktitle = {Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2016)},
  publisher = {The COLING 2016 Organizing Committee},
  pages = {30-39},
  year = {2016},
  month = {12},
  url = {https://aclanthology.org/W16-5104},
  eprint = {https://aclanthology.org/W16-5104.pdf}
}
```
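A loading sketch following BigBio's usual packaging (the config names below are assumptions based on BigBio's `_source` / `_bigbio_kb` naming convention, not stated in this card, so listing them first avoids guessing):

```python
from datasets import load_dataset, get_dataset_config_names

# BigBio repos typically expose the original schema as "<name>_source" and a
# harmonized schema as "<name>_bigbio_kb"; check the actual names first.
print(get_dataset_config_names("bigbio/swedish_medical_ner"))

# Example config name, assumed:
ds = load_dataset("bigbio/swedish_medical_ner", "swedish_medical_ner_bigbio_kb")
print(ds)
```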
Murple/ksponspeech
---
annotations_creators:
- expert-generated
language:
- ko
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: KsponSpeech
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---

# Dataset Card for KsponSpeech

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [AIHub](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=123)
- **Repository:**
- **Paper:** [KsponSpeech](https://www.mdpi.com/2076-3417/10/19/6936)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This corpus contains 969 h of general open-domain dialog utterances, spoken by about 2000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. The accompanying paper also presents the baseline performance of an end-to-end speech recognition model trained with KsponSpeech, investigates the performance of standard end-to-end architectures and the number of sub-word units suitable for Korean, and discusses issues that should be considered in spontaneous Korean speech recognition. KsponSpeech is publicly available on an open data hub site of the Korea government.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Korean

## Dataset Structure

### Data Instances

```python
{
  'id': 'KsponSpeech_E00001',
  'audio': {'path': None,
            'array': array([0.0010376 , 0.00085449, 0.00097656, ..., 0.00250244, 0.0022583 , 0.00253296]),
            'sampling_rate': 16000},
  'text': '어 일단은 억지로 과장해서 이렇게 하는 것보다 진실된 마음으로 이걸 어떻게 전달할 수 있을까 공감을 시킬 수 있을까 해서 좀'
}
```

### Data Fields

- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.

### Data Splits

|          | Train  | Valid | eval.clean | eval.other |
| -------- | ------ | ----- | ---------- | ---------- |
| #samples | 620000 | 2545  | 3000       | 3000       |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@Article{app10196936,
AUTHOR = {Bang, Jeong-Uk and Yun, Seung and Kim, Seung-Hi and Choi, Mu-Yeol and Lee, Min-Kyu and Kim, Yeo-Jeong and Kim, Dong-Hyun and Park, Jun and Lee, Young-Jik and Kim, Sang-Hun},
TITLE = {KsponSpeech: Korean Spontaneous Speech Corpus for Automatic Speech Recognition},
JOURNAL = {Applied Sciences},
VOLUME = {10},
YEAR = {2020},
NUMBER = {19},
ARTICLE-NUMBER = {6936},
URL = {https://www.mdpi.com/2076-3417/10/19/6936},
ISSN = {2076-3417},
ABSTRACT = {This paper introduces a large-scale spontaneous speech corpus of Korean, named KsponSpeech. This corpus contains 969 h of general open-domain dialog utterances, spoken by about 2000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. This paper also presents the baseline performance of an end-to-end speech recognition model trained with KsponSpeech. In addition, we investigated the performance of standard end-to-end architectures and the number of sub-word units suitable for Korean. We investigated issues that should be considered in spontaneous speech recognition in Korean. KsponSpeech is publicly available on an open data hub site of the Korea government.},
DOI = {10.3390/app10196936}
}
```
lsy641/PsyQA
---
license: mit
---

The data is originally sourced from (Sun et al., 2021). (Liu et al., 2023) processed the data to make it a dataset available via the Hugging Face API, with training/validation/testing splits.

**Please cite:**
```
@misc{liu2023enhancing,
  title={Enhancing Long-form Text Generation in Mental Health with Task-adaptive Tokenization},
  author={Siyang Liu and Naihao Deng and Sahand Sabour and Yilin Jia and Minlie Huang and Rada Mihalcea},
  year={2023},
  eprint={2310.05317},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
```
@inproceedings{sun2021psyqa,
  title={PsyQA: A Chinese Dataset for Generating Long Counseling Text for Mental Health Support},
  author={Sun, Hao and Lin, Zhenru and Zheng, Chujie and Liu, Siyang and Huang, Minlie},
  booktitle={Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021},
  pages={1489--1503},
  year={2021}
}
```
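A minimal loading sketch for the processed version (split and field names are not documented in this card, so they are inspected rather than assumed):

```python
from datasets import load_dataset

ds = load_dataset("lsy641/PsyQA")
print(ds)  # lists the available splits and their field names

first_split = next(iter(ds.values()))
print(first_split[0])  # inspect one record
```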
derek-thomas/squad-v1.1-t5-question-generation
---
dataset_info:
  features:
  - name: context
    dtype: string
  - name: questions
    dtype: string
  splits:
  - name: train
    num_bytes: 20293805
    num_examples: 18896
  - name: validation
    num_bytes: 2376313
    num_examples: 2067
  download_size: 12600387
  dataset_size: 22670118
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Question Generation for T5 based on Squad V1.1
size_categories:
- 10K<n<100K
source_datasets:
- extended|squad
tags:
- questiongeneration
- question-generation
- text2text-generation
task_categories:
- text2text-generation
task_ids: []
---

# Dataset Card for "squad-v1.1-t5-question-generation"

## Dataset Description

- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Paper:** [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)

### Dataset Summary

This is a modified version of the Stanford Question Answering Dataset (SQuAD), restructured for question generation with All Questions in One Line (AQOL), just like in [Transformer-based End-to-End Question Generation](https://arxiv.org/pdf/2005.01107v1.pdf), specifically for the T5 family of models. The prefix is `generate questions: ` so that the task can be uniquely identified by a trained model. Check out the generation notebook [here](https://nbviewer.org/urls/huggingface.co/datasets/derek-thomas/squad-v1.1-t5-question-generation/resolve/main/Squad_V1_Question_Generation.ipynb).

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

## Dataset Structure

### Data Instances

#### plain_text

An example of 'train' looks as follows.
```
{
    "context": "generate questions: This is a test context.",
    "questions": "Is this a test? {sep_token} Is this another Test {sep_token}"
}
```

### Data Fields

The data fields are the same among all splits.

#### plain_text
- `context`: a `string` feature.
- `questions`: a `string` feature.

### Data Splits

| name |train|validation|
|----------|----:|---------:|
|plain_text|18896| 2067|

### Citation Information
```
@article{2016arXiv160605250R,
       author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}, Konstantin and {Liang}, Percy},
        title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
      journal = {arXiv e-prints},
         year = 2016,
          eid = {arXiv:1606.05250},
        pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
       eprint = {1606.05250},
}
```

### Contributions

Thanks to [Derek Thomas](https://huggingface.co/derek-thomas) and [Thomas Simonini](https://huggingface.co/ThomasSimonini) for adding this to the hub

Check out: [How to contribute more](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Visitors

[![Visitors](https://api.visitorbadge.io/api/visitors?path=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Fderek-thomas%2Fsquad-v1.1-t5-question-generation&label=Visitors&countColor=%23263759)](https://visitorbadge.io/status?path=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Fderek-thomas%2Fsquad-v1.1-t5-question-generation)
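A sketch of how a T5 model fine-tuned on this data would be prompted at inference time, following the card's `generate questions: ` prefix and `{sep_token}`-joined targets (the checkpoint name below is a placeholder, not a published model):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "path/to/your-finetuned-t5"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The input carries the same task prefix used in the training data.
text = "generate questions: This is a test context."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Generated questions come back joined by the separator token.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```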
awacke1/ICD10-Clinical-Terminology
---
license: mit
---

ICD10-Clinical-Terminology: a PyArrow fast-search demonstration over ICD-10 clinical terminology, intended as context for AI MMoE applications.
IndianaUniversityDatasetsModels/Medical_reports_Splits
---
dataset_info:
  features:
  - name: MeSH
    dtype: string
  - name: Problems
    dtype: string
  - name: findings
    dtype: string
  - name: impression
    dtype: string
  splits:
  - name: train
    num_bytes: 1046536.8153707596
    num_examples: 2831
  - name: test
    num_bytes: 92417.59231462024
    num_examples: 250
  - name: validation
    num_bytes: 92417.59231462024
    num_examples: 250
  download_size: 395063
  dataset_size: 1231372
task_categories:
- text-generation
- text2text-generation
language:
- en
pretty_name: Indiana University X-Rays and Reports dataset
size_categories:
- 1K<n<10K
---

# Dataset Card for "Medical_reports_Splits"

Original Source: [openi.nlm.nih.gov](https://openi.nlm.nih.gov/)

Kaggle Source: [Chest X-rays (Indiana University)](https://www.kaggle.com/datasets/raddar/chest-xrays-indiana-university)

[For more information](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AyoubChLin/CNN_News_Articles_2011-2022
---
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: CNN News Articles from 2011 to 2022
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': business
          '1': entertainment
          '2': health
          '3': news
          '4': politics
          '5': sport
  splits:
  - name: train
    num_examples: 32218
  - name: test
    num_examples: 5686
train-eval-index:
- config: default
  task: text-classification
  task_id: multi_class_classification
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    text: text
    label: target
---

# CNN News Articles 2011-2022 Dataset

## Introduction

This dataset contains CNN news articles from 2011 to 2022 after basic cleaning. The dataset includes the following information:
- Category
- Full text

The data was downloaded from Kaggle at this URL: https://www.kaggle.com/datasets/hadasu92/cnn-articles-after-basic-cleaning.

The dataset was split into two sets:
- Train set with 32,218 examples
- Test set with 5,686 examples

## Usage

This dataset can be used for different natural language processing tasks such as text classification, text summarization, named entity recognition, and more. The dataset is available in Hugging Face Datasets with the ID AyoubChLin/CNN_News_Articles_2011-2022.

## Acknowledgements

The data was collected by the Kaggle user [hadasu92](https://github.com/hadasu). The splitting of the dataset into train and test sets was performed by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/).
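A loading sketch that also recovers the label names declared in the metadata above:

```python
from datasets import load_dataset

ds = load_dataset("AyoubChLin/CNN_News_Articles_2011-2022")

# The class_label feature exposes the label names declared in the YAML.
label_names = ds["train"].features["label"].names
# ['business', 'entertainment', 'health', 'news', 'politics', 'sport']

example = ds["train"][0]
print(example["text"][:200], "->", label_names[example["label"]])
```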
pain/Arabic-Tweets
---
license: cc-by-4.0
language:
- ar
---

# Dataset Card for Arabic-Tweets

## Dataset Description

- **Homepage:** https://ieee-dataport.org/open-access/masc-massive-arabic-speech-corpus
- **Paper:** https://ieeexplore.ieee.org/document/10022652

### Dataset Summary

This dataset was collected from Twitter and comprises more than 41 GB of clean Arabic tweets, with nearly 4 billion Arabic words (12 million unique Arabic words).

### Languages

Arabic

### Source Data

Twitter

### Example of data loading using streaming:

```py
from datasets import load_dataset
dataset = load_dataset("pain/Arabic-Tweets", split='train', streaming=True)
print(next(iter(dataset)))
```

### Example of data loading without streaming (it will be downloaded locally):

```py
from datasets import load_dataset
dataset = load_dataset("pain/Arabic-Tweets", split='train')
print(dataset[0])  # `dataset` is already the train split, so index it directly
```

#### Initial Data Collection and Normalization

The collected data comprises 100 GB of raw Twitter data. Only tweets with Arabic characters were crawled. It was observed that the raw data contained a large number of Persian tweets as well as many Arabic words with repeated characters. Because of this, and in order to improve the quality of the data, the raw data was processed as follows: hashtags, mentions, and links were removed; tweets that contain Persian characters, three consecutive repeated characters, or a single-character word were dropped; normalization of Arabic letters was applied. This resulted in more than 41 GB of clean data with nearly 4 billion Arabic words (12 million unique Arabic words).

## Considerations for Using the Data

- This data has been collected to create a language model. The tweets were published without any review of their content; therefore, we are not responsible for the content of any tweet.

### Licensing Information

[Creative Commons Attribution](https://creativecommons.org/licenses/by/4.0/)

### Citation Information

```
@INPROCEEDINGS{10022652,
  author={Al-Fetyani, Mohammad and Al-Barham, Muhammad and Abandah, Gheith and Alsharkawi, Adham and Dawas, Maha},
  booktitle={2022 IEEE Spoken Language Technology Workshop (SLT)},
  title={MASC: Massive Arabic Speech Corpus},
  year={2023},
  volume={},
  number={},
  pages={1006-1013},
  doi={10.1109/SLT54892.2023.10022652}}
```
pvduy/sharegpt_alpaca_oa_vicuna_format
---
dataset_info:
  features:
  - name: prompt
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 494337138
    num_examples: 324160
  - name: test
    num_bytes: 5944776
    num_examples: 1499
  download_size: 263071058
  dataset_size: 500281914
---

# Dataset Card for "sharegpt_alpaca_oa_vicuna_format"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DongfuJiang/FeTaQA
---
license: mit
task_categories:
- table-question-answering
- table-to-text
- question-answering
language:
- en
pretty_name: fetaqa
size_categories:
- 1K<n<10K
---

This repo is an unofficial mirror of the FeTaQA dataset from the paper [FeTaQA: Free-form Table Question Answering](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00446/109273/FeTaQA-Free-form-Table-Question-Answering). Its purpose is to make it easier for users to download and use the dataset. All the data is publicly available on [the official GitHub site](https://github.com/Yale-LILY/FeTaQA).

If there is anything wrong, please raise an issue in the community and I will fix it when I am available.
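For convenience, a minimal loading sketch for this mirror (split and field names are not documented in this card and should be inspected after loading):

```python
from datasets import load_dataset

ds = load_dataset("DongfuJiang/FeTaQA")
print(ds)  # lists the available splits and their fields
```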
WiktorS/polish-news
---
license: apache-2.0
task_categories:
- text-classification
- summarization
- text-generation
language:
- pl
size_categories:
- 100K<n<1M
---

This dataset contains more than 250k articles obtained from the Polish news site `tvp.info.pl`. The main purpose of collecting the data was to create a transformer-based model for text summarization.

Columns:
* `link` - link to the article
* `title` - original title of the article
* `headline` - lead/headline of the article - the first paragraph, visible directly from the page
* `content` - full textual content of the article

Link to the original repo: https://github.com/WiktorSob/scraper-tvp

Download the data:
```python
from datasets import load_dataset

dataset = load_dataset("WiktorS/polish-news")
```
AntiplagiatCompany/HWR200
---
license: apache-2.0
language:
- ru
tags:
- ocr
- htr
- handwritten text recognition
- near duplicate detection
- reuse detection
pretty_name: HWR200
size_categories:
- 10K<n<100K
---

# HWR200: New open-access dataset of handwritten text images in Russian

This is a dataset of handwritten text images in Russian, created by 200 writers with different handwriting and photographed in different environments.

### How to download

```
pip install huggingface_hub
apt-get install git-lfs
git clone https://huggingface.co/datasets/AntiplagiatCompany/HWR200
```

### Description

* Total size is 44G
* Total number of images with text is 30030
* Number of writers is 200
* Every handwritten text is captured in three different ways: scanned, photographed in poor light, and photographed in good light
* Different authors could write the same texts
* Some texts are "reuses": they contain copies of sentences from other texts

### Annotation example

```
// for original texts:
{
  sentences: [{id: <id>, text: <sentence>}, ...],
  words_count: <word count>,
  full_text: <full text>
}

// for reuse texts:
{
  reuse_0: {
    sentences: [{id: <id>, text: <sentence>}, ...],
    id: <original text file name>
    intersection_score: <intersection_score>
  }
  reuse_1: { // if exists
    sentences: [{id: <id>, text: <sentence>}, ...],
    id: <original text file name>
    intersection_score: <intersection_score>
  }
  start clear sentences: [<sentence>, <sentence>, ...] // if exists
  end clear sentences: [<sentence>, <sentence>, ...] // if exists
  words_count: <word count>
  full_text: <full text>
}

// for fpr texts:
{
  sentences: [{id: <id>, text: <sentence>}, ...],
  words_count: <word count>,
  full_text: <full text>
}
```
snorkelai/snorkel-curated-instruction-tuning
---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---

***<p style="font-size: 20px">Please check out our Blog Post - [How we built a better GenAI with programmatic data development](https://snorkel.ai/how-we-built-better-genai-with-programmatic-data-development/) for more details!</p>***

## Summary

`snorkel-curated-instruction-tuning` is a curated dataset that consists of high-quality instruction-response pairs. These pairs were programmatically filtered with weak supervision from the open-source datasets [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k), [Open Assistant](https://huggingface.co/datasets/OpenAssistant/oasst1), and [Helpful Instructions](https://huggingface.co/datasets/HuggingFaceH4/helpful_instructions). To enhance the dataset, we also programmatically classified each instruction based on the InstructGPT paper. For a more comprehensive understanding of our methodology, please visit our [blog](https://snorkel.ai/how-we-built-better-genai-with-programmatic-data-development/).

## Dataset Overview & Methodology

Instruction tuning is an important step in developing effective [large language models (LLMs)](https://snorkel.ai/large-language-models-llms/) for generative AI tasks. While proprietary datasets have been used by LLM-backed chatbots, the open-source community has created similar datasets accessible to everyone. However, the quality of responses collected by volunteers has been inconsistent, affecting the quality of open-source models. Furthermore, there is currently no standard classification of instructions across datasets (many lack classification altogether), which can complicate measurements of instruction diversity when compiling from multiple sources.

Snorkel, with its expertise in converting noisy signals into high-quality supervision, addressed this issue by programmatically scoring, sampling, and filtering open-source datasets. The curated dataset and methodology are now available for public use. Please refer to our [blog](https://snorkel.ai/how-we-built-better-genai-with-programmatic-data-development/) for more details on methods and evaluation.

## File descriptions

- `snorkel_curated_11k.jsonl`: 11k high-quality instruction-response pairs selected from the open-source datasets mentioned above. This file was used to instruction-tune [snorkelai/RedPajama-7B-Chat-Curated](https://huggingface.co/snorkelai/RedPajama-7B-Chat-Curated/).
- `snorkel_hold_out_set.jsonl`: A hold-out set for evaluation, comparing human preferences between models.

## Intended Uses

- Instruction-tuning LLMs

For more detailed information, please refer to our blog post, [How we built a better GenAI with programmatic data development](https://snorkel.ai/how-we-built-better-genai-with-programmatic-data-development/).

## License/Attribution

**Copyright (2023) Snorkel AI, Inc.** This dataset was developed at [Snorkel AI](https://snorkel.ai/) and its use is subject to the Apache 2.0 license. This work comes from the collaboration with Together Computer in releasing the [snorkelai/RedPajama-7B-Chat-Curated](https://huggingface.co/snorkelai/RedPajama-7B-Chat-Curated/) model. Please refer to the licenses of the data subsets you use.

- [Open Assistant](https://huggingface.co/datasets/OpenAssistant/oasst1) is under Apache 2.0 license.
- [Helpful Instructions](https://huggingface.co/datasets/HuggingFaceH4/helpful_instructions) is under Apache 2.0 license.
- [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) is under CC BY-SA 3.0 license. Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license: Wikipedia (various pages) - https://www.wikipedia.org/ Copyright © Wikipedia editors and contributors. Databricks (https://www.databricks.com) Copyright © Databricks

## Language

English

## Version

Version: 1.0

To cite this dataset, please use:

```
@software{snorkel2023instructiontuning,
  author = {Snorkel AI},
  title = {Applying programmatic data development to Generative AI with Snorkel},
  month = jun,
  year = 2023,
  url = {https://huggingface.co/datasets/snorkelai/snorkel-curated-instruction-tuning}
}
```

**Owner: Snorkel AI, Inc.**

## Community

Join us on [Snorkel AI Slack](https://snorkel.ai/slack)
jondurbin/airoboros-gpt4-1.2
---
license: cc-by-nc-4.0
---

A continuation of [gpt4-1.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1), with:

* over 1000 new coding instructions, along with several hundred prompts using `PLAINFORMAT` to *hopefully* allow code generation without markdown, backticks, or verbose commentary
* nearly 4000 additional math/reasoning examples, but this time using the Orca style "[prompt]. Explain like I'm five." / justify your logic, etc.
* several hundred roleplaying examples
* additional misc/general data

### Usage and License Notices

All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:

- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI

So, to reiterate: this model (and datasets) cannot be used commercially.
Ali-C137/Arabic_guanaco_oasst1
---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 20962143
    num_examples: 9846
  - name: test
    num_bytes: 1102534
    num_examples: 518
  download_size: 10417464
  dataset_size: 22064677
license: apache-2.0
language:
- ar
size_categories:
- 1K<n<10K
---

# Dataset Card for "Arabic_guanaco_oasst1"

This dataset is the openassistant-guanaco dataset (a subset of the Open Assistant dataset) translated into Arabic.

You can find the original dataset here: https://huggingface.co/datasets/timdettmers/openassistant-guanaco

Or the main dataset here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main

This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples. For further information, please see the main dataset.

License: Apache 2.0

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fazni/roles-based-on-skills
---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: Role
    dtype: string
  - name: text
    dtype: string
  - name: label
    dtype: int64
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 2272289
    num_examples: 3660
  - name: test
    num_bytes: 577048
    num_examples: 916
  download_size: 1174905
  dataset_size: 2849337
---
Waterhorse/chess_data
---
license: apache-2.0
task_categories:
- text-generation
- conversational
language:
- en
---

# The Chess Dataset

## Dataset Description

- **Paper:** [ChessGPT: Bridging Policy Learning and Language Modeling](https://arxiv.org/abs/2306.09200)

### Dataset Summary

The dataset consists of three sources of data described in the paper:

- **ChessCLIP dataset**: Annotated PGNs for training CLIP.
- **ChessGPT Base dataset**: Game dataset, language dataset, and mixed dataset for training ChessGPT-Base.
- **ChessGPT Chat dataset**: Conversational dataset for training ChessGPT-Chat.

Because of legal issues, for the ChessGPT dataset we do not open-source the chess-book, chess-forum, chess-blog, and YouTube transcript datasets. For the ChessCLIP dataset, we do not open-source the two commercial annotated datasets we use.

### Languages

The language of the data is primarily English.

## Dataset Structure

- **ChessCLIP dataset**: Annotated PGNs for training CLIP.
- **ChessGPT Base dataset**: Game dataset: ccrl, pro_player, lichess_db_37, chess_puzzles, chess_modeling. Language dataset: redpajama, oscar, c4, pile, wikipedia, and stackexchange. Mixed dataset: annotated_pgn.
- **ChessGPT Chat dataset**: Chess-related conversation datasets.

### Data Instances

- **ChessCLIP dataset**:
```python
[Event "GMA, Wijk aan Zee NED"]
[Site "?"]
[Date "2003.??.??"]
[Round "1"]
[White "Anand,V"]
[Black "Radjabov,T"]
[Result "1/2"]
[WhiteElo "2750"]
[BlackElo "2620"]
[ECO "C12"]
[PlyCount "55"]
[Annotator "Hathaway"]

1. e4 e6 { I'm not terribly familiar with the style of Radjabov, so I don't know if this is his usual opening. } 2. d4 d5 3. Nc3 Nf6 (3...Bb4 { The Winawer Variation is probably best, though not as easy to play. }) 4. Bg5 { threatens e4-e5xf6 } (4. e5 { keeps pieces on the board and avoids ...dxe4 }) 4...Bb4 (4...Be7 { is more common and aims to trade dark-square bishops to ease Black's cramp }) (4...dxe4 { aims to avoid any cramp by bringing pieces into alignment for trading, though White does get at least one very good piece (Ne4 or Bg5) and an easier time castling queen-side, to stir up king-side threats } ...
```

- **ChessGPT dataset**:
```python
{"metadata": {"id": "65268522", "title": "1923 USSR Chess Championship", "url": "https://en.wikipedia.org/wiki/1923%20USSR%20Chess%20Championship"},
 "text": "The 1923 USSR Chess Championship was the second edition of USSR Chess Championship. Held from 8 to 24 July in Petrograd. The tournament was won by Peter Romanovsky.",
 "pipeline_key": "wiki-squad-train.jsonl.zst-10549"
}
```

- **ChessGPT Chat dataset**:
```python
{"conversations": [
  {"author": "Human 0", "text": "Analyze the given sentence and identify its grammatical errors, if any.\nThese type of problems are difficult to solve.", "url": "gpt-4-alpaca", "score": 0},
  {"author": "Human 1", "text": "The sentence is incorrect. The error is in the usage of \"these\". Since \"these\" is a plural determiner, it should be used with a plural noun. Similarly, \"type\" is singular noun and it is used with the singular determiner, \"this\".\n\nThe correct sentence should be \"This type of problem is difficult to solve.\" or \"These types of problems are difficult to solve.\"", "url": "gpt-4-alpaca", "score": 0}
  ]
}
```

### Data Splits

The data is unsplit.

## Dataset Creation

Check [ChessGPT: Bridging Policy Learning and Language Modeling](https://arxiv.org/abs/2306.09200) for more details.

### Licensing Information

**Annotated PGN**: [PGNlib](https://www.angelfire.com/games3/smartbridge/), [lichess](https://lichess.org/terms-of-service), [GameKnot](https://gameknot.com/pg/pol_eula.htm), [pathtomaster](https://www.pathtochessmastery.com/)

**Game Dataset**: [Lichess dataset](https://www.tldrlegal.com/license/creative-commons-cc0-1-0-universal), [CCRL](https://ccrl.chessdom.com/ccrl/), [pro-player](https://www.pgnmentor.com/files.html), [puzzle](https://www.tldrlegal.com/license/creative-commons-cc0-1-0-universal), Modeling data (Apache-2.0).

**Language Dataset**: [Wikipedia](https://huggingface.co/datasets/wikipedia#licensing-information), [Redpajama](https://github.com/togethercomputer/RedPajama-Data#license), [Oscar](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information), [Pile](https://github.com/EleutherAI/the-pile/blob/master/LICENSE), [StackExchange](https://archive.org/details/stackexchange), [C4](https://huggingface.co/datasets/allenai/c4#license)

**Conversational Dataset**: [Chessable forums](https://www.chessable.com/terms), [Reddit](https://www.redditinc.com/policies/data-api-terms), [gpt-4](https://openai.com/policies/terms-of-use), [sharegpt](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb), oasst1 (Apache-2.0), dolly-v2 (MIT)

### Citation Information

```bibtex
@article{feng2023chessgpt,
  title={ChessGPT: Bridging Policy Learning and Language Modeling},
  author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun},
  journal={arXiv preprint arXiv:2306.09200},
  year={2023}
}
```
bayartsogt/test_dlub_2023
---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 6600
    num_examples: 38
  - name: test
    num_bytes: 7153
    num_examples: 38
  download_size: 0
  dataset_size: 13753
---

# Dataset Card for "test_dlub_2023"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
psymon/namuwiki_alpaca_dataset
---
license: cc-by-nc-sa-2.0
language:
- ko
---

## namuwiki for Stanford Alpaca

This dataset adapts the Namuwiki dump file for Stanford Alpaca training. The data format is identical to Stanford Alpaca's.

Each instruction has the form '<Namuwiki article title>' + '에 대해 설명해줘.' (i.e., "Explain <article title> to me."),<br> and the output is the content of the article's == 개요 == (overview) section.

Articles without an overview section, or whose overview is too short, were excluded.
flozi00/conversations
---
language:
- de
task_categories:
- conversational
- text-generation
dataset_info:
  features:
  - name: raw
    dtype: string
  - name: from
    dtype: string
  - name: labels
    dtype: string
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  - name: first_message
    dtype: string
  - name: first_answer
    dtype: string
  splits:
  - name: train
    num_bytes: 80567935.1091266
    num_examples: 23275
  download_size: 46600297
  dataset_size: 80567935.1091266
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

This dataset is an uncensored, heavily cleaned, and double-checked merge of several German datasets / subsets.

The mission of this work is to build a high-quality dataset for the German LLM community.

This repo is continuously updated, and old parts are replaced with newer ones.

Quality over Quantity

https://github.com/flozi00/chat-data-experiments/blob/main/chat_combiner.py
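A quick inspection sketch using the turn structure declared in the metadata above (field names are taken from `dataset_info`):

```python
from datasets import load_dataset

ds = load_dataset("flozi00/conversations", split="train")

# Each record stores the dialogue as a list of {"from", "value"} turns.
for turn in ds[0]["conversations"]:
    print(f'{turn["from"]}: {turn["value"][:80]}')
```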
zan/lima-ja
--- license: cc-by-nc-sa-4.0 language: - ja --- # Dataset Card for LIMA-JA ## Dataset Description This is Japanese LIMA dataset, which is translated from the [LIMA dataset](https://huggingface.co/datasets/GAIR/lima) that Meta's [LIMA model](https://arxiv.org/abs/2305.11206) (Zhou et al., 2023) was trained on. In this process, after the Japanese translation is performed by ChatGPT, I am making some minor adjustments. ## Usage ```python >>> from datasets import load_dataset >>> dataset = load_dataset('zan/lima-ja') >>> # dataset = load_dataset('zan/lima-ja', "v1") ``` ```python >>> dataset["train"][0] {'conversations': ['脳細胞は移動することができるのでしょうか? 移動とは、遠距離の移動(可能ならば脳内のみで)を指します。', 'この質問は比較的広範であり、脳は単にニューロンだけでなく、グリア細胞(支持細胞)や前ミトーシスの神経幹細胞も含まれることを考慮する必要があります。さらに、重要な同僚科学者たちが指摘しているように、発生段階も非常に重要であり、発達中の胎児の脳は成人の脳とは非常に異なります。\n\nしかし、さまざまな出版物を精査した結果、この質問の答えは実際には非常に単純であることがわかりました。結論として、脳細胞は移動します。\n\n成人の脳では、グリア細胞が脳内を移動します(Klämbt、2009年)。グリア細胞は多くの機能に関与していますが、移動するグリア細胞の注目すべき例は、標的の軸索を見つけるために比較的長距離を移動し、それに巻きついて絶縁性のミエリン鞘を形成するオリゴデンドロサイトです(TsaiとMiller、2002年)。\n\n神経幹細胞は損傷に対応して長距離を移動します(Imitola et al.、2004年)し、特定の幹細胞の位置(例えば、海馬や脳室下帯)から他の領域に移動します(Clarke、2003年)。\n\nミトーシス後でありながら未分化のニューロンは、魚(Scott et al.、2012年)や哺乳類、および非人猿でも成人の脳内を移動することが示されています(Sawada et al.、2011年)。\n\n驚くことではありませんが、グリア細胞、幹細胞、およびニューロンは胎児の発生中も移動します。特に、末梢機能を果たすために運命づけられた分裂後のニューロンは、神経堤から標的の位置まで比較的長い距離を移動しなければなりません(Neuroscience、第2版、Neuronal Migration)。'], 'source': 'stackexchange'} ``` ## Version Description ## v1 A version that has been modified by adding about 100 changes after being translated by ChatGPT. ## v2 more modified version (Coming soon...) ## License If the source data of LIMA has a stricter license than CC BY-NC-SA, the LIMA dataset follows the same. Otherwise, it follows the CC BY-NC-SA license. ## Citation Information ``` @InProceedings{huggingface:dataset, title = {LIMA-JA: Japanese LIMA Dataset for Efficient Instruction-tuning}, author = {zan}, year = {2023} } ```
ssbuild/alpaca_finance_en
--- license: apache-2.0 ---
Vezora/Mini_Orca_Uncencored_Alpaca
---
license: apache-2.0
---

This dataset is a modified version of psmathur's Mini Orca dataset, formatted in the Alpaca format and uncensored.

For Alpaca LoRA users:

Modules you can target with LoRA: "gate_proj", "down_proj", "up_proj", "q_proj", "v_proj", "k_proj", "o_proj"

Most LoRA models use: "q_proj", "v_proj", "k_proj", "o_proj"

Platypus, which got terrific results, used: "gate_proj", "down_proj", "up_proj"

Research on targeting certain modules still needs to be done, but if you don't want to train over a previously trained model's newly learned abilities, target different modules than the ones used for the original training.

Hyperparameters used by Platypus for the 13B and 70B models:

| Hyperparameter | Platypus2-13B / 70B |
|---|---|
| batch size | 16 |
| micro batch size | 1 |
| num epochs | 1 |
| learning rate | 4e-4 / 3e-4 |
| cutoff len | 4096 |
| lora rank | 16 |
| lora alpha | 16 |
| lora dropout | 0.05 |
| lora target modules | gate_proj, down_proj, up_proj |
| train on inputs | False |
| add eos token | False |
| group by length | False |
| prompt template | alpaca |
| lr scheduler | cosine |
| warmup steps | 100 |

I would recommend using a batch size of 4-10 and a cutoff length of ≤ 2048 to avoid VRAM issues. Use load_in_4bit, NormalFloat quantization, and bf16 for a single 24 GB card.

If training with oobabooga, you must edit the "training.py" file in the "oobabooga_windows\text-generation-webui\modules" folder. In line 49, edit the standard modules to the modules you would like to target.

If training with Alpaca LoRA, use the argument --lora_target_modules when running the train.py command. To load in 4-bit you must edit the train file, adding load in 4 bit, bf16, and NormalFloat quantization.
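As a sketch, the Platypus-style module targeting above maps onto a PEFT `LoraConfig` roughly like this (a minimal example under the hyperparameters listed, not the exact setup used by either project):

```python
from peft import LoraConfig

# Platypus-style targeting of the MLP projections; swap in the attention
# projections ("q_proj", "v_proj", "k_proj", "o_proj") for the more common
# attention-only setup, or combine both lists to target everything.
lora_config = LoraConfig(
    r=16,                 # lora rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",
)
```

Passing this config to `peft.get_peft_model` applies the adapter to the chosen modules; the same list goes to Alpaca LoRA's --lora_target_modules argument.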
fhirfly/medicalquestions
---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- medical
pretty_name: FhirFly Medical Questions
size_categories:
- 10K<n<100K
---

# 🤗 Dataset Card: fhirfly/medicalquestions

## Dataset Overview

- Dataset name: fhirfly/medicalquestions
- Dataset size: 25,102 questions
- Labels: 1 (medical), 0 (non-medical)
- Distribution: Evenly distributed between medical and non-medical questions

## Dataset Description

The fhirfly/medicalquestions dataset is a collection of 25,102 questions labeled as either medical or non-medical. The dataset aims to provide a diverse range of questions covering various medical and non-medical domains.

The questions in the dataset have been manually labeled by domain experts based on the context and content of each question. Each question is assigned a label of 1 if it is determined to be a medical question and a label of 0 if it is classified as a non-medical question.

## Dataset Structure

The dataset consists of a single file containing the following columns:

- **Text**: The text of the question.
- **Label**: The label assigned to each question, either 1 (medical) or 0 (non-medical).

The questions are evenly distributed between medical and non-medical categories, ensuring a balanced dataset for training and evaluation.

## Potential Biases

Efforts have been made to ensure that the dataset is representative of various medical and non-medical topics. However, it is important to acknowledge that biases may exist in the dataset due to the subjective nature of labeling questions. Biases could be present in terms of the types of questions included, the representation of certain medical conditions or non-medical topics, or the labeling process itself.

It is recommended to perform thorough evaluation and analysis of the dataset to identify and mitigate potential biases during model training and deployment. Care should be taken to address any biases to ensure fair and unbiased predictions.

## Dataset Quality

The fhirfly/medicalquestions dataset has undergone manual labeling by domain experts, which helps maintain a high level of quality and accuracy. However, human labeling is not entirely immune to errors or subjectivity. To ensure the quality of the dataset, a thorough review process has been conducted to minimize errors and maintain consistency in labeling. Nonetheless, it is advisable to validate and verify the data as part of your specific use case to ensure it meets your requirements.

## Data License

The fhirfly/medicalquestions dataset is released under the MIT license. Please refer to the license file accompanying the dataset for more information on its usage and any restrictions that may apply.

## Dataset Citation

If you use the fhirfly/medicalquestions dataset in your work, please cite it as:

```
@dataset{fhirfly/medicalquestions,
  title = {fhirfly/medicalquestions},
  author = {fhirfly},
  year = {2023},
  publisher = {Hugging Face},
  version = {1.0.0},
  url = {https://huggingface.co/datasets/fhirfly/medicalquestions}
}
```
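A quick loading-and-filtering sketch (the split name and lowercase column names are assumptions; adjust them if the actual schema uses the capitalized Text/Label names described above):

```python
from datasets import load_dataset

ds = load_dataset("fhirfly/medicalquestions")

# Keep only the questions labeled as medical (label == 1).
medical = ds["train"].filter(lambda row: row["label"] == 1)
print(len(medical), medical[0])
```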
Amani27/massive_translation_dataset
---
configs:
- config_name: default
  data_files:
  - split: train
    path: "train.csv"
  - split: validation
    path: "validation.csv"
  - split: test
    path: "test.csv"
license: cc-by-4.0
task_categories:
- translation
language:
- en
- de
- es
- hi
- fr
- it
- ar
- nl
- ja
- pt
size_categories:
- 10K<n<100K
---

# Dataset Card for Massive Dataset for Translation

### Dataset Summary

This dataset is derived from the AmazonScience/MASSIVE dataset for translation tasks.

### Supported Tasks and Leaderboards

Translation

### Languages

1. English (en_US)
2. German (de_DE)
3. Hindi (hi_IN)
4. Spanish (es_ES)
5. French (fr_FR)
6. Italian (it_IT)
7. Arabic (ar_SA)
8. Dutch (nl_NL)
9. Japanese (ja_JP)
10. Portuguese (pt_PT)
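A sketch for pulling out one language pair, assuming the columns are named by the locale codes listed above:

```python
from datasets import load_dataset

ds = load_dataset("Amani27/massive_translation_dataset", split="train")

# Build a few English-German pairs; column names are assumed to match
# the locale codes listed above (e.g. "en_US", "de_DE").
pairs = [(row["en_US"], row["de_DE"]) for row in ds.select(range(5))]
print(pairs[0])
```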
parrotzone/sdxl-1.0
---
license: openrail++
---

# check [sdxl.parrotzone.art](https://sdxl.parrotzone.art) for easy viewing ⋆。°✩

---

## all images were made with SDXL 1.0 + the 0.9 VAE

- steps: 20
- cfg scale: 7
- no refiner
- random seeds
longface/logicLM
---
license: apache-2.0
task_categories:
- question-answering
size_categories:
- 1K<n<10K
---
Daoguang/CodeM-Multilinugal-Data
---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: Python
    path: "python.json"
  - split: JavaScript
    path: "js.json"
  - split: TypeScript
    path: "ts.json"
  - split: C
    path: "c.json"
  - split: Cpp
    path: "cpp.json"
  - split: Java
    path: "java.json"
  - split: Go
    path: "go.json"
  - split: HTML
    path: "html.json"
  - split: Mixed
    path: "mixed.json"
language:
- en
pretty_name: CodeM_data
size_categories:
- 10K<n<100K
---

# CodeM: Can Programming Languages Boost Each Other via Instruction Tuning?

[Paper](https://arxiv.org/pdf/2308.16824.pdf) | [GitHub](https://github.com/NL2Code/CodeM/tree/main/data)

## Abstract

When human programmers have mastered a programming language, it becomes easier for them to learn a new one. In this report, we focus on exploring whether programming languages can boost each other during the instruction fine-tuning phase of code large language models. We conduct extensive experiments on 8 popular programming languages (Python, JavaScript, TypeScript, C, C++, Java, Go, HTML) on StarCoder. Results demonstrate that programming languages can significantly improve each other. For example, CodeM-Python 15B trained on Python is able to increase Java by an absolute 17.95% pass@1 on HumanEval-X. More surprisingly, we found that CodeM-HTML 7B trained on the HTML corpus can improve Java by an absolute 15.24% pass@1. Our training data is released at [this https URL](https://huggingface.co/datasets/Daoguang/CodeM-Multilinugal-Data).

## Usage

```python
from datasets import load_dataset

# load CodeM's training data
dataset = load_dataset("Daoguang/CodeM-Multilinugal-Data")
```

## Reference

```
@misc{zan2023codem,
      title={Can Programming Languages Boost Each Other via Instruction Tuning?},
      author={Daoguang Zan and Ailun Yu and Bo Shen and Jiaxin Zhang and Taihong Chen and Bing Geng and Bei Chen and Jichuan Ji and Yafen Yao and Yongji Wang and Qianxiang Wang},
      year={2023},
      eprint={2308.16824},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
Vision-Flan/vision-flan_191-task_1k
---
task_categories:
- visual-question-answering
language:
- en
pretty_name: Vision-Flan
size_categories:
- 100K<n<1M
---

# 🚀 Vision-Flan Dataset

vision-flan_191-task-1k is a human-labeled visual instruction tuning dataset consisting of 191 diverse tasks and 1,000 examples for each task. It is constructed for visual instruction tuning and for building large-scale vision-language models.

## Paper or blog for more information:

https://github.com/VT-NLP/MultiInstruct/

https://vision-flan.github.io/

*Paper coming soon* 😊

## Citation

*Paper coming soon* 😊. If you use Vision-Flan, please use the following citations:

```
@misc{visionFlan2023,
      title = {Vision-Flan: Scaling Visual Instruction Tuning},
      url = {https://vision-flan.github.io/},
      author = {Zhiyang Xu and Trevor Ashby and Chao Feng and Rulin Shao and Ying Shen and Di Jin and Qifan Wang and Lifu Huang},
      month = {Sep},
      year = {2023}
}
```

```
@inproceedings{DBLP:conf/acl/XuSH23,
  author    = {Zhiyang Xu and Ying Shen and Lifu Huang},
  editor    = {Anna Rogers and Jordan L. Boyd{-}Graber and Naoaki Okazaki},
  title     = {MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning},
  booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), {ACL} 2023, Toronto, Canada, July 9-14, 2023},
  pages     = {11445--11465},
  publisher = {Association for Computational Linguistics},
  year      = {2023},
  url       = {https://doi.org/10.18653/v1/2023.acl-long.641},
  doi       = {10.18653/v1/2023.acl-long.641},
  timestamp = {Thu, 10 Aug 2023 12:35:59 +0200},
  biburl    = {https://dblp.org/rec/conf/acl/XuSH23.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

## License:

Please carefully check the licenses for all the datasets on this [page](https://vision-flan.github.io/tasks.html) before use.

## Contact:

If you have any questions or concerns please contact us at zhiyangx@vt.edu.
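A loading sketch; both the `train` split and the per-example task-name column are guesses, since neither is documented above:

```python
from collections import Counter

from datasets import load_dataset

# Sketch only: the "train" split and the "task_name" column are assumptions,
# not confirmed by this card.
dataset = load_dataset("Vision-Flan/vision-flan_191-task_1k", split="train")

task_counts = Counter(dataset["task_name"])
print(len(task_counts), "tasks;", task_counts.most_common(3))
```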
Intuit-GenSRF/es_mental_health_counseling
---
dataset_info:
  features:
  - name: Context
    dtype: string
  - name: Response
    dtype: string
  - name: split
    dtype: string
  - name: text
    dtype: string
  - name: text_spanish
    dtype: string
  splits:
  - name: train
    num_bytes: 13763461
    num_examples: 3512
  download_size: 7425319
  dataset_size: 13763461
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for "es_mental_health_counseling"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
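Until the card is filled in, a minimal loading sketch (the feature names come from the `dataset_info` block above):

```python
from datasets import load_dataset

# Feature names ("Context", "Response", "text_spanish") come from the
# dataset_info block above; treat this as a sketch, not official usage.
dataset = load_dataset("Intuit-GenSRF/es_mental_health_counseling", split="train")

row = dataset[0]
print(row["Context"][:200])
print(row["text_spanish"][:200])
```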
Mxode/C-Language-Chat-Debug-Multiturn-Zh
---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- zh
tags:
- code
- C
- chat
- assistant
- multiturn
- debug
---

Roughly 1,300 user-assistant **multi-turn conversations** (in Chinese) set in **C-language** scenarios. Each conversation has been organized into a single-line format. A sample looks like this:

```JSON
{
    "id": 1045,
    "conversation": [
        {
            "user": "你好,AI助手。我最近在写一个C语言程序,但是遇到了一些问题,希望你能帮我检查一下。",
            "assistant": "你好,我很乐意帮助你。请把你的代码发给我,我会尽快检查并给出建议。"
        },
        {
            "user": "好的,这是我的代码。这段代码的主要功能是计算斐波那契数列的前n项。",
            "assistant": "让我看一下......嗯,这里有一个小错误。在第10行,你应该使用`++i`而不是`i++`来递增i的值。修改后的代码应该是这样的\\n```c\\nfor (int i = 0; i < n; ++i) {\\n    if (i == 0 || i == 1) {\\n        printf(\\\"%d \\\", fib[i]);\\n    } else {\\n        fib[i] = fib[i - 1] + fib[i - 2];\\n        printf(\\\"%d \\\", fib[i]);\\n    }\\n}\\n```"
        },
        {
            "user": "谢谢你的帮助!我还有一个问题。这段代码中有一个递归函数,用于计算斐波那契数列。但是我发现这个递归函数的效率很低,因为它会重复计算很多相同的子问题。有什么方法可以优化这个递归函数吗?",
            "assistant": "当然可以。你可以尝试使用动态规划的方法来优化这个递归函数。这样可以将已经计算过的子问题的解存储起来,避免重复计算。下面是一个使用动态规划优化后的斐波那契数列计算函数\\n```c\\n#include <stdio.h>\\nint fib[100]; // 用于存储斐波那契数列的数组\\nvoid fib_dp(int n) {\\n    fib[0] = 0;\\n    fib[1] = 1;\\n    for (int i = 2; i <= n; ++i) {\\n        fib[i] = fib[i - 1] + fib[i - 2];\\n    }\\n}\\nint main() {\\n    int n = 10; // 你想要计算的斐波那契数列的项数\\n    fib_dp(n);\\n    for (int i = 0; i <= n; ++i) {\\n        printf(\\\"%d \\\", fib[i]);\\n    }\\n    return 0;\\n}\\n```"
        },
        {
            "user": "哇,这真是太棒了!你的建议大大提高了程序的运行速度。再次感谢你的帮助!\n",
            "assistant": ""
        }
    ]
}
```

The assistant is always referred to as "AI助手" ("AI assistant"). If you want a specific assistant name, **you can globally replace "AI助手"** — see the sketch below.
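A minimal renaming sketch, assuming the data is stored as one JSON object per line; the local file name `data.jsonl` and the replacement name `CodeHelper` are illustrative placeholders:

```python
import json

# Assumes one JSON object per line; "data.jsonl" and "CodeHelper" are
# illustrative placeholders, not names shipped with the dataset.
with open("data.jsonl", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f]

for sample in samples:
    for turn in sample["conversation"]:
        turn["user"] = turn["user"].replace("AI助手", "CodeHelper")
        turn["assistant"] = turn["assistant"].replace("AI助手", "CodeHelper")
```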
chargoddard/chai-dpo
---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
- config_name: unrolled
  data_files:
  - split: train
    path: unrolled/train-*
dataset_info:
- config_name: default
  features:
  - name: history
    list:
    - name: sender
      dtype: string
    - name: value
      dtype: string
  - name: rejected
    sequence: string
  - name: accepted
    dtype: string
  - name: thumbs_up
    dtype: bool
  - name: submission_id
    dtype: string
  - name: model_name
    dtype: string
  - name: bot_id
    dtype: string
  splits:
  - name: train
    num_bytes: 223007429
    num_examples: 113263
  download_size: 60868294
  dataset_size: 223007429
- config_name: unrolled
  features:
  - name: history
    list:
    - name: sender
      dtype: string
    - name: value
      dtype: string
  - name: rejected
    dtype: string
  - name: accepted
    dtype: string
  - name: thumbs_up
    dtype: bool
  - name: submission_id
    dtype: string
  - name: model_name
    dtype: string
  - name: bot_id
    dtype: string
  splits:
  - name: train
    num_bytes: 361172645
    num_examples: 198719
  download_size: 61083616
  dataset_size: 361172645
---

# Dataset Card for "chai-dpo"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
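Until the card is filled in, a loading sketch (the config and feature names come from the `dataset_info` block above):

```python
from datasets import load_dataset

# The "unrolled" config pairs each history with single accepted/rejected
# strings; names come from the dataset_info block above (sketch only).
dataset = load_dataset("chargoddard/chai-dpo", "unrolled", split="train")

row = dataset[0]
print("accepted:", row["accepted"][:80])
print("rejected:", row["rejected"][:80])
```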
defog/wikisql
---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  - name: prompt
    dtype: string
  - name: completion
    dtype: string
  splits:
  - name: train
    num_bytes: 5525298
    num_examples: 1000
  download_size: 761250
  dataset_size: 5525298
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for "wikisql"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
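Until the card is filled in, a loading sketch (the feature names come from the `dataset_info` block above):

```python
from datasets import load_dataset

# Each row carries raw "prompt"/"completion" strings plus pre-tokenized
# "input_ids"; names come from the dataset_info block above (sketch only).
dataset = load_dataset("defog/wikisql", split="train")

row = dataset[0]
print(row["prompt"])
print(row["completion"])
print(len(row["input_ids"]), "tokens")
```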
Javiparmu/ui-icons
---
license: gpl-3.0
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 23916866.0
    num_examples: 996
  download_size: 21285499
  dataset_size: 23916866.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
keirp/hungarian_national_hs_finals_exam
---
configs:
- config_name: default
  data_files:
  - split: test
    path: test.csv
dataset_info:
  features:
  - name: Question
    dtype: string
  splits:
  - name: test
    num_bytes: 9300
    num_examples: 33
  download_size: 6283
  dataset_size: 9300
---

# Testing Language Models on a Held-Out High School National Finals Exam

When xAI recently released [Grok-1](https://x.ai/), they evaluated it on the 2023 Hungarian national high school finals in mathematics, which was published after the training data cutoff for all the models in their evaluation. While MATH and GSM8k are the standard benchmarks for evaluating the mathematical abilities of large language models, **there are risks that modern models overfit to these datasets**, either from training directly on the test sets or from tuning the model hyperparameters to maximize test set performance. By evaluating on a truly held-out test set, we can better gauge the mathematical performance of these models.

We evaluate on the [2023 Hungarian national high school finals in mathematics](https://dload-oktatas.educatio.hu/erettsegi/feladatok_2023tavasz_kozep/k_matang_23maj_fl.pdf) and grade by hand using [the provided rubric](https://dload-oktatas.educatio.hu/erettsegi/feladatok_2023tavasz_kozep/k_matang_23maj_ut.pdf). All model solutions were graded by myself over the course of one day. Model solutions were sampled using temperature 0.1. For base models such as Code Llama, Llemma, and Mistral-7B, a 5-shot prompt was used. For instruction-tuned models, we used the default prompt template for that model.

## Results

**Note**: In an earlier version of the LaTeX transcription of the exam, question 14a had incorrect formatting and question 14b did not contain all the required information to solve the problem. These issues have been fixed and the numbers are updated.
| Model | Exam Score | GSM8k | MATH |
| ------------------------------------------------------------------------------ | ---------- | ------ | ------ |
| [Code Llama 7B](https://huggingface.co/codellama/CodeLlama-7b-hf) (few-shot) | 8\% | 10.5% | 4.5% |
| [MetaMath 7B](https://huggingface.co/meta-math/MetaMath-7B-V1.0) | 20\% | 66.5\% | 19.8\% |
| [MAmmoTH 7B](https://huggingface.co/TIGER-Lab/MAmmoTH-7B) | 17\% | 50.5\% | 10.4\% |
| [MAmmoTH Coder 7B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-7B) | 11\% | 22.5\% | 7.9\% |
| [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) (few-shot) | 23\% | 36.4\% | 18\% |
| - | - | - | - |
| [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) (few-shot) | 22\% | 39.2\% | - |
| [MetaMath Mistral 7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) | 29\% | 77.7\% | 28.2\% |
| [OpenChat 3.5](https://huggingface.co/openchat/openchat_3.5) | 37\% | 77.3\% | 28.6\% |
| - | - | - | - |
| [Code Llama 34B](https://huggingface.co/codellama/CodeLlama-34b-hf) (few-shot) | 15\% | 29.6\% | 12.2\% |
| [MAmmoTH Coder 34B](https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-34B) | 17\% | 34.3\% | 11.6\% |
| [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b) (few-shot) | 43\% | 51.5\% | 25.0\% |
| - | - | - | - |
| [Qwen 7B](https://huggingface.co/eleutherai/qwen-7b) | 22\% | 51.7\% | 11.6\% |
| - | - | - | - |
| [Cohere Command](https://cohere.com/) | 18\% | - | - |
| [GPT-3.5 Turbo](https://openai.com/) | 41\% | 57.1\% | 23.5\% |
| [GPT-4](https://openai.com/) | 68\% | 92.0\% | 42.5\% |
| [Claude 2](https://www.anthropic.com/) | 55\% | 88.0\% | - |
| [Grok-0 (33B)](https://x.ai/) | 37\% | 56.8\% | 15.7\% |
| [Grok-1](https://x.ai/) | 59\% | 62.9\% | 23.9\% |

## Observations

1. Plotting GSM8k performance versus performance on the exam, we can see clear evidence that several models overfit to the benchmark.

![GSM8k vs Exam](assets/gsm8k.png)

2. Despite [claiming](https://huggingface.co/openchat/openchat_3.5#comparison-with-xai-grok-models) that OpenChat 3.5 is competitive with Grok-1, it only gets around half the score on the held-out exam, indicating that it simply overfits to these evaluations.

3. Llemma 34B is competitive with GPT-3.5 Turbo on the held-out exam. Further instruction tuning Llemma 34B should give even greater performance.

## Solutions

Please find model solutions and corresponding grades in the `solutions` folder.

## Running the Evaluation

To run the evaluation, run the following command:

```bash
python run_exam.py --model EleutherAI/llemma_34b --exam test/exam.csv --prompt few_shot
```

## Notes on Grading

There are a few problems which require either creating or reading a figure. For these problems, I graded the model solutions as incorrect. In the future, when models have these abilities, this should be changed.

## Citation

To cite this article, use the following citation:

```bibtex
@misc{testing_language_models_on_a_held_out_high_school_national_finals_exam,
  title={Testing Language Models on a Held-Out High School National Finals Exam},
  author={Keiran Paster},
  howpublished={\url{https://huggingface.co/datasets/keirp/hungarian_national_hs_finals_exam}},
  journal = {HuggingFace repository},
  year={2023},
}
```
fnlp/character-llm-data
---
license: cc-by-nc-4.0
---

# Character-LLM: A Trainable Agent for Role-Playing

This is the training data for Character-LLM, containing experience data for the nine characters used to train Character-LLMs.

To download the dataset, please run the following code with Python; you can find the downloaded data in `/path/to/local_dir`.

```python
from huggingface_hub import snapshot_download
snapshot_download(
    local_dir_use_symlinks=True,
    repo_type="dataset",
    repo_id="fnlp/character-llm-data",
    local_dir="/path/to/local_dir")
```

The `prompted/` directory contains datasets that can be used for supervised fine-tuning directly, and `generated/` consists of raw data generated by gpt-3.5-turbo, which can be converted into the `prompted` style.

Here are the statistics of the training data.

|                      | # Scenes | # Words | # Turns |
|----------------------|----------|---------|---------|
| Cleopatra VII        | 1.4K     | 723K    | 14.3    |
| Lord Voldemort       | 1.4K     | 599K    | 13.1    |
| Spartacus            | 1.4K     | 646K    | 12.3    |
| Hermione Granger     | 1.5K     | 628K    | 15.5    |
| Isaac Newton         | 1.6K     | 772K    | 12.6    |
| Julius Caesar        | 1.6K     | 820K    | 12.9    |
| Ludwig van Beethoven | 1.6K     | 663K    | 12.2    |
| Socrates             | 1.6K     | 896K    | 14.1    |
| Martin Luther King   | 2.2K     | 1,038K  | 12.0    |
| Avg.                 | 1.6K     | 754K    | 13.2    |
AmanMussa/kazakh-instruction-v2
---
license: mit
task_categories:
- question-answering
- text-generation
language:
- kk
size_categories:
- 10K<n<100K
---

# Dataset Card for Dataset Name

Self-instruct data pairs for the Kazakh language.

## Dataset Details

The dataset is translated from the Stanford Alpaca instruction dataset via the Google Translate API. In addition:

1. Translation errors were manually fixed.
2. Common Kazakhstani names and places were added.
3. Instructions about Kazakhstan's history and culture were added.

### Dataset Description

- **Curated by:** Mussa Aman
- **Language(s) (NLP):** Kazakh
- **License:** MIT

## Uses

This dataset is curated to fine-tune the LLaMA 2 model for the Kazakh language. It aims to enhance the model's understanding and processing of Kazakh, addressing the gap in NLP resources for Kazakh as a low-resource language. The dataset follows the self-instruct format: each pair consists of an "instruction", an "input", and an "output", which is crucial for improving the model's language comprehension and task performance.

## Citation

**BibTeX:**

```
@misc{aman_2023,
  author = {Aman Mussa},
  title = {Self-instruct data pairs for Kazakh language},
  year = {2023},
  howpublished = {\url{https://huggingface.co/datasets/AmanMussa/instructions_kaz_version_1}},
}
```

**APA:**

Aman, M. (2023). Self-instruct data pairs for Kazakh language. Retrieved from https://huggingface.co/datasets/AmanMussa/instructions_kaz_version_1

## Dataset Card Contact

Please contact via email: a_mussa@kbtu.kz
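A prompt-assembly sketch in the Alpaca style; the `train` split name is an assumption, and the field names follow the self-instruct format described above:

```python
from datasets import load_dataset

# Sketch only: the "train" split is an assumption; field names follow the
# self-instruct format described above.
dataset = load_dataset("AmanMussa/kazakh-instruction-v2", split="train")

def to_prompt(row):
    if row["input"]:
        return f"{row['instruction']}\n\n{row['input']}\n\n{row['output']}"
    return f"{row['instruction']}\n\n{row['output']}"

print(to_prompt(dataset[0]))
```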
glitchbench/GlitchBench
---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: id
    dtype: string
  - name: reddit
    dtype: string
  - name: glitch-type
    dtype: string
  - name: game
    dtype: string
  - name: source
    dtype: string
  - name: description
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: validation
    num_bytes: 686309290
    num_examples: 607
  download_size: 686303027
  dataset_size: 686309290
license: mit
task_categories:
- image-to-text
language:
- en
tags:
- Video Game
- Glitch
pretty_name: GlitchBench
size_categories:
- n<1K
---

# GlitchBench

This repository contains the dataset for the paper [`GlitchBench: Can large multimodal models detect video game glitches?`](https://arxiv.org/abs/2312.05291)

<div align="center">
<p>
by <a href="https://taesiri.ai">Mohammad Reza Taesiri</a>, Tianjun Feng, <a href="https://anhnguyen.me/research/">Anh Nguyen</a>, and <a href="https://asgaard.ece.ualberta.ca/">Cor-Paul Bezemer</a>
</p>
<p>
(CVPR 2024)
</p>
</div>

## Abstract

Large multimodal models (LMMs) have evolved from large language models (LLMs) to integrate multiple input modalities, such as visual inputs. This integration augments the capacity of LLMs in tasks requiring visual comprehension and reasoning. However, the extent and limitations of their enhanced abilities are not fully understood. To address this gap, we introduce GlitchBench, a novel benchmark designed to test and evaluate the common-sense reasoning and visual recognition capabilities of large multimodal models. Our dataset is curated from a variety of unusual, infrequent, and glitched scenarios from video game content and aims to challenge both the visual and linguistic reasoning powers of LMMs in detecting and interpreting out-of-the-ordinary events and scene composition.
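A loading sketch (the `validation` split and field names come from the `dataset_info` block above):

```python
from datasets import load_dataset

# Split and field names come from the dataset_info block above (sketch only).
dataset = load_dataset("glitchbench/GlitchBench", split="validation")

sample = dataset[0]
sample["image"].save("glitch.png")  # decoded as a PIL image
print(sample["game"], "-", sample["description"])
```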
silk-road/ChatHaruhi-Expand-118K
---
license: cc-by-4.0
task_categories:
- text-generation
- text2text-generation
language:
- zh
- en
size_categories:
- 100K<n<1M
---

# ChatHaruhi Expanded Dataset 118K

- 62,663 instances from the original ChatHaruhi-54K
- 42,255 English instances from RoleLLM
- 13,166 Chinese instances

**GitHub repo:** https://github.com/LC1332/Chat-Haruhi-Suzumiya

Please star our GitHub repo if you find the dataset useful.

## Regenerate Data

If you want to regenerate data with a different context length, a different embedding model, or your own character, we have now refactored the final data-generating pipeline.

RoleLLM data was generated by https://github.com/LC1332/Haruhi-2-Dev/blob/main/notebook/%E4%BB%8EHF%E6%95%B0%E6%8D%AE%E6%A0%BC%E5%BC%8F%E5%BC%80%E5%A7%8B%E7%BB%84%E7%BB%87%E6%95%B0%E6%8D%AE(RoleLLM).ipynb

Chinese data was generated by https://github.com/LC1332/Haruhi-2-Dev/blob/main/notebook/%E9%A2%9D%E5%A4%96%E4%B8%AD%E6%96%87%E6%95%B0%E6%8D%AE.ipynb

The 63K Haruhi data was generated by https://github.com/LC1332/Haruhi-2-Dev/blob/main/notebook/%E4%BD%BF%E7%94%A8ChatHaruhi54K%E7%94%9F%E6%88%90%E6%95%B0%E6%8D%AE.ipynb

Data generation runs at around 8 instances per second on a T4 GPU.

The code was partly implemented by [豆角](https://github.com/goodnessSZW) and finally verified by [LC1332](https://github.com/LC1332).

The English data was filtered by https://github.com/LC1332/Haruhi-2-Dev/blob/main/notebook/Prepare_English_Training_set.ipynb

## Citation

Cite our paper if you use this dataset in your publication:

```bibtex
@misc{li2023chatharuhi,
      title={ChatHaruhi: Reviving Anime Character in Reality via Large Language Model},
      author={Cheng Li and Ziang Leng and Chenxi Yan and Junyi Shen and Hao Wang and Weishi MI and Yaying Fei and Xiaoyang Feng and Song Yan and HaoSheng Wang and Linkang Zhan and Yaokai Jia and Pingyu Wu and Haozhen Sun},
      year={2023},
      eprint={2308.09597},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
Nexusflow/Function_Call_Definitions
---
license: cc-by-nc-sa-4.0
dataset_info:
- config_name: CVECPE
  features:
  - name: function_calls
    dtype: string
  - name: descriptions
    dtype: string
  splits:
  - name: train
    num_bytes: 8237
    num_examples: 2
  download_size: 13384
  dataset_size: 8237
- config_name: CVECPE_Multi (Nested)
  features:
  - name: function_calls
    dtype: string
  - name: descriptions
    dtype: string
  splits:
  - name: train
    num_bytes: 17425
    num_examples: 20
  download_size: 15503
  dataset_size: 17425
- config_name: Climate
  features:
  - name: function_calls
    dtype: string
  - name: descriptions
    dtype: string
  splits:
  - name: train
    num_bytes: 2905
    num_examples: 8
  download_size: 4163
  dataset_size: 2905
- config_name: OTX
  features:
  - name: function_calls
    dtype: string
  - name: descriptions
    dtype: string
  splits:
  - name: train
    num_bytes: 7040
    num_examples: 9
  download_size: 8407
  dataset_size: 7040
- config_name: Places
  features:
  - name: function_calls
    dtype: string
  - name: descriptions
    dtype: string
  splits:
  - name: train
    num_bytes: 2460
    num_examples: 7
  download_size: 5759
  dataset_size: 2460
- config_name: VT_Multi (Nested)
  features:
  - name: function_calls
    dtype: string
  - name: descriptions
    dtype: string
  splits:
  - name: train
    num_bytes: 18137
    num_examples: 29
  download_size: 13810
  dataset_size: 18137
- config_name: VT_Multi (Parallel)
  features:
  - name: function_calls
    dtype: string
  - name: descriptions
    dtype: string
  splits:
  - name: train
    num_bytes: 18137
    num_examples: 29
  download_size: 13810
  dataset_size: 18137
- config_name: VirusTotal
  features:
  - name: function_calls
    dtype: string
  - name: descriptions
    dtype: string
  splits:
  - name: train
    num_bytes: 11501
    num_examples: 12
  download_size: 11668
  dataset_size: 11501
configs:
- config_name: CVECPE
  data_files:
  - split: train
    path: CVECPE/train-*
- config_name: CVECPE_Multi (Nested)
  data_files:
  - split: train
    path: CVECPE_Multi (Nested)/train-*
- config_name: Climate
  data_files:
  - split: train
    path: Climate/train-*
- config_name: OTX
  data_files:
  - split: train
    path: OTX/train-*
- config_name: Places
  data_files:
  - split: train
    path: Places/train-*
- config_name: VT_Multi (Nested)
  data_files:
  - split: train
    path: VT_Multi (Nested)/train-*
- config_name: VT_Multi (Parallel)
  data_files:
  - split: train
    path: VT_Multi (Parallel)/train-*
- config_name: VirusTotal
  data_files:
  - split: train
    path: VirusTotal/train-*
---

# Dataset Card for "Nexusflow/Function_Call_Definitions"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
prhegde/preference-data-math-stack-exchange
---
license: apache-2.0
---

This preference dataset is derived from the [Stack Exchange dataset](https://huggingface.co/datasets/HuggingFaceH4/stack-exchange-preferences), which contains questions and answers from the Stack Overflow Data Dump across various topics. For this work, we used only the questions and answers from the [math.stackexchange.com](https://huggingface.co/datasets/HuggingFaceH4/stack-exchange-preferences/tree/main/data/math.meta.stackexchange.com) sub-folder.

The questions are grouped with answers that are assigned a score following the Anthropic paper:

```
score = log2 (1 + upvotes) rounded to the nearest integer, plus 1 if the answer was accepted by the questioner
(we assign a score of −1 if the number of upvotes is negative).
```

We performed the following processing to derive the final dataset:

1. Basic pre-processing ([code](https://github.com/PraveenSH/dpo-arithmo-mistral-7B/blob/main/src/data_processing/stack_exchange_data.py)) to clean the text.
2. Filtering for mathematical questions using a regex-based detector ([code](https://github.com/PraveenSH/dpo-arithmo-mistral-7B/blob/main/src/data_processing/stack_exchange_data.py)).
3. For each question, extracting two answers: the one with the highest score and the one with the lowest score. The former is used as the preferred response and the latter as the rejected response.

## References

```
@online{h4stackexchange,
  author = {Lambert, Nathan and Tunstall, Lewis and Rajani, Nazneen and Thrush, Tristan},
  title = {HuggingFace H4 Stack Exchange Preference Dataset},
  year = 2023,
  url = {https://huggingface.co/datasets/HuggingFaceH4/stack-exchange-preferences},
}
```
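For clarity, a minimal sketch of the scoring rule quoted above (our illustration, not the upstream implementation):

```python
import math

def answer_score(upvotes: int, accepted: bool) -> int:
    """Sketch of the scoring rule quoted above, not the upstream code."""
    if upvotes < 0:
        return -1
    score = round(math.log2(1 + upvotes))
    return score + 1 if accepted else score

print(answer_score(upvotes=15, accepted=True))  # round(log2(16)) + 1 = 5
```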
Salesforce/summedits
---
license: cc-by-4.0
task_categories:
- text-classification
- summarization
language:
- en
tags:
- biology
- finance
- legal
- medical
pretty_name: SummEdits
size_categories:
- 1K<n<10K
---

# Factual Consistency in Summarization

Can you tell which edits of summaries are consistent, and which are inconsistent?

<p align="center">
<img width="650" src="https://raw.githubusercontent.com/salesforce/factualNLG/master/images/summedits_examples.png">
</p>

## SummEdits Benchmark (Section 6-7)

We release the 6,348 samples of data for the 10 domains in SummEdits. Each sample has entries for:

- `domain`: one of the 10 domains in SummEdits,
- `id`: a unique ID for the sample,
- `doc`: the input document,
- `summary`: the summary that is either consistent or inconsistent with the facts in the document,
- `label`: 1 if the summary is factually consistent, and 0 otherwise,
- `seed_summary`: the (consistent) seed summary that was used as a starting point for the summary,
- `edit_types`: for summaries that are inconsistent, the GPT-4-classified type of error.

For more detail on data loading and benchmarking, we recommend you check out the GitHub repo: [https://github.com/salesforce/factualNLG](https://github.com/salesforce/factualNLG)
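A loading sketch (the field names come from the list above; the single `train` split is an assumption):

```python
from datasets import load_dataset

# Field names come from the list above; the "train" split is an assumption.
dataset = load_dataset("Salesforce/summedits", split="train")

inconsistent = dataset.filter(lambda row: row["label"] == 0)
print(f"{len(inconsistent)} of {len(dataset)} summaries are inconsistent.")
```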
AntGroup-MI/Osprey-724K
---
license: cc-by-nc-4.0
task_categories:
- conversational
- text-generation
- summarization
- question-answering
language:
- en
---

### Osprey-724K Dataset Card

Osprey-724K is an instruction dataset with mask-text pairs, containing around 724K GPT-generated multimodal dialogues that encourage MLLMs toward fine-grained pixel-level image understanding. It contains object-level and part-level samples, plus additional instruction samples for robustness and flexibility.

#### Dataset type:

- Object-level: [osprey_conversation.json](https://huggingface.co/datasets/AntGroup-MI/Osprey-724K/resolve/main/osprey_conversation.json?download=true), [osprey_detail_description.json](https://huggingface.co/datasets/AntGroup-MI/Osprey-724K/resolve/main/osprey_detail_description.json?download=true)
- Part-level: [osprey_part_level.json](https://huggingface.co/datasets/AntGroup-MI/Osprey-724K/resolve/main/osprey_part_level.json?download=true)
- Robustness & Flexibility: [osprey_lvis_positive_negative.json](https://huggingface.co/datasets/AntGroup-MI/Osprey-724K/resolve/main/osprey_lvis_positive_negative.json?download=true), [osprey_short_form.json](https://huggingface.co/datasets/AntGroup-MI/Osprey-724K/resolve/main/osprey_short_form.json?download=true)

### Paper and Code

Paper: [https://arxiv.org/abs/2312.10032](https://arxiv.org/abs/2312.10032) \
Code: [https://github.com/CircleRadon/Osprey](https://github.com/CircleRadon/Osprey)

### License

Attribution-NonCommercial 4.0 International \
It should abide by the policy of OpenAI: https://openai.com/policies/terms-of-use.

### Citations

```
@misc{Osprey,
      title={Osprey: Pixel Understanding with Visual Instruction Tuning},
      author={Yuqian Yuan, Wentong Li, Jian Liu, Dongqi Tang, Xinjie Luo, Chi Qin, Lei Zhang and Jianke Zhu},
      year={2023},
      eprint={2312.10032},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
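A download sketch for one of the JSON files listed above; the top-level structure (a list of dialogue records) is an assumption:

```python
import json

from huggingface_hub import hf_hub_download

# Downloads one of the JSON files listed above; assuming the file holds a
# top-level list of dialogue records (sketch only).
path = hf_hub_download(
    repo_id="AntGroup-MI/Osprey-724K",
    filename="osprey_conversation.json",
    repo_type="dataset",
)
with open(path, encoding="utf-8") as f:
    records = json.load(f)
print(len(records), "records")
```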
5CD-AI/Vietnamese-LLaVA-Instruct-150K-gg-translated
---
task_categories:
- visual-question-answering
- question-answering
language:
- vi
- en
---
CausalLM/GPT-4-Self-Instruct-Turkish
---
license: cc-by-4.0
language:
- tr
tags:
- gpt4
---

**Sorry, it's no longer available on Hugging Face. Please reach out to those who have already downloaded it. If you have a copy, please refrain from re-uploading it to Hugging Face. The people here don't deserve it. See also: https://twitter.com/RealJosephus/status/1779913520529707387**

As per [the community's request](https://huggingface.co/datasets/CausalLM/GPT-4-Self-Instruct-German/discussions/1), here we share a Turkish dataset synthesized using the OpenAI GPT-4 model with Self-Instruct, utilizing some excess Azure credits. Please feel free to use it.

All questions and answers are newly generated by GPT-4, without specialized verification; only simple filtering and strict semantic-similarity control have been applied. We hope that this will be helpful for fine-tuning open-source models for non-English languages, particularly Turkish.

This dataset will be updated continuously.
Technoculture/synthetic-clinical-notes-embedded
---
language:
- en
license: mit
size_categories:
- 100K<n<1M
task_categories:
- question-answering
- summarization
pretty_name: Synthetic Clinical Notes
tags:
- starmpcc/Asclepius-Synthetic-Clinical-Notes
- BAAI/bge-small-en-v1.5
- medical
dataset_info:
  features:
  - name: output
    dtype: string
  - name: task
    dtype: string
  - name: instruction
    dtype: string
  - name: input
    dtype: string
  - name: input_embedding
    sequence: float32
  - name: output_embedding
    sequence: float64
  splits:
  - name: train
    num_bytes: 1199998956
    num_examples: 158114
  download_size: 967764780
  dataset_size: 1199998956
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Synthetic Clinical Notes

This dataset is a post-processed version of [starmpcc/Asclepius-Synthetic-Clinical-Notes](https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes):

- Converted into Alpaca format (`instruction`, `input`, and `output`)
- Added embeddings for the `input` and `output` columns using [BAAI/bge-small-en-v1.5](https://huggingface.co/datasets/BAAI/bge-small-en-v1.5)

|                    | Details |
| ------------------ | ------- |
| Sample Count       | 158k |
| Token Count        | 648m |
| Origin             | https://figshare.com/authors/Zhengyun_Zhao/16480335 |
| Source of raw data | PubMed Central (PMC) and MIMIC 3 |
| Processing details | [original](https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes), [paper](https://arxiv.org/pdf/2309.00237.pdf) <a target="_blank" href="https://colab.research.google.com/drive/12nk-nLo46P8GOVqpBIA2wDAYj5SnUGW5?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> |
| Embedding Model    | [BAAI/bge-small-en-v1.5](https://huggingface.co/datasets/BAAI/bge-small-en-v1.5) |

## Data Diversity

| index | Example Output | GPT-4 Rationale | GPT-4 Diversity Rating |
|-------|----------------|-----------------|------------------------|
| 137083 | The coreferential expressions used to refer to the patient's severe bioprosthetic mitral valve stenosis and severe tricuspid regurgitation in the hospital course section of the discharge summary were "the patient had an irregular heartbeat with a diastolic murmur detected by auscultation" and "Transthoracic echocardiography revealed severe bioprosthetic mitral valve stenosis and severe tricuspid regurgitation." | Cardiology, Diagnostic Imaging, Physical Examination | 5 |
| 113558 | The coreference resolved in the hospital course section related to the patient's perforation in the sigmoid colon is that the perforation found in the colon was 3-cm long and located 5cm above the anastomosis. This led to a colon segmental resection with loop sigmoid colostomy and subsequent recovery with no complications. | Gastrointestinal Surgery, Perforation Location, Post-surgical Recovery | 5 |
| 97204 | The prescribed biologic medications, Adalimumab and later Certolizumab, were used to treat the resurgence of the patient's tattoo manifestations after tapering of systemic glucocorticoids, but Adalimumab caused an injection site reaction, which prompted a change to Certolizumab. | Pharmacology, Medication Adjustment, Treatment Complications | 5 |
| 53669 | In the hospital course of the discharge summary, coreferences for the patient's respiratory status are resolved using terms such as "her pulmonary clinical signs," "she presented no signs of septic shock," and "her clinical condition finally improved." Coreferences for the patient's treatment are resolved using phrases such as "she was given three doses of spiramycin," "antimicrobial therapy with ceftriaxone was initiated," and "triple antimicrobial therapy with piperacillin-tazobactam, spiramycin, and amikacin was introduced." | Respiratory Infection, Antimicrobial Therapy, Clinical Improvement | 5 |
| 39865 | Using Named Entity Recognition in the discharge summary, the identified named entities related to Stickler syndrome are "Stickler syndrome" and "beaded vitreous phenotype." The identified named entities related to diagnostic testing are "Multiplex Ligation-dependent Probe Amplification (MLPA)" and "exons 41 and 42 [c.3025-3168, p.Gly1009-Val1056]." However, it should be noted that the discharge summary does not provide a comprehensive list of all named entities related to Stickler syndrome and diagnostic testing, and further review of the patient's medical records may be necessary for a complete analysis. | Genetic Testing, Stickler Syndrome, Diagnostic Specificity | 5 |
| 85187 | The patient was diagnosed with metastatic Leydig cell tumour of the spine and underwent surgery through a right subscapular 3rd rib thoracotomy followed by postoperative radiotherapy with radical intent. The patient is advised to follow up regularly as per oncologist's advice and to come back immediately in case of any medical emergency. No discharge medications were given as per the discharge summary. | Oncology, Surgical Approach, Radiotherapy | 5 |
| 99107 | The patient had a complicated problem with their heart's aortic valve and the wall dividing the two chambers of their heart. The valve became detached and the wall had growths on it, likely from an infection. Despite treatment, the patient's condition worsened and they were made comfortable with symptom control and palliative care before passing away. | Cardiac Condition, Palliative Care, End-of-Life | 5 |
| 65981 | The diagnosis for the 10-year-old female patient was a non-displaced scaphoid fracture, and the diagnostic studies used were a dual-energy computed tomography (DECT) scan which showed bone marrow edema (BME) in the scaphoid bone on VNCa images and a confirmatory magnetic resonance imaging (MRI). | Pediatric Orthopedics, Advanced Imaging, Fracture Diagnosis | 5 |
| 68814 | The expanded forms of the abbreviations in the hospital course section are: transnasal endoscopic excision (removal of pituitary adenoma using an endoscope through the nasal cavity) and MRN (medical record number). The diagnosis section abbreviations do not need expansion as they are already spelled out (pituitary adenoma). | Endoscopic Surgery, Pituitary Adenoma, Abbreviation Clarification | 5 |
| 16059 | Based on the given discharge summary, the named entities related to Patient 1's diagnosis of influenza B that can be identified are the diagnosis itself and the prescribed medication, oseltamivir. However, there is no mention of the patient's prior immunization history or any recommendations for future vaccination. Therefore, we cannot fully respond to the healthcare professional's instruction regarding receiving the influenza vaccination to prevent future infections. | Infectious Disease, Influenza B Treatment, Pharmacological Management | 5 |

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63486df1f8f01fcc4b23e97d/LDIbciLi_PYK3Wys-yGcB.png)

## Data Lineage

```text
Technoculture/Synthetic-Clinical-Notes
 ↳ starmpcc/Asclepius-Synthetic-Clinical-Notes
    ↳ zhengyun21/PMC-Patients [code](https://github.com/zhao-zy15/PMC-Patients)
       ↳ PubMed Central (PMC)
```

---

> prompt for GPT-4 based annotation on diversity
>
> ```text
> | index | Example Output |
> |--------|---------------|
> | 137083 | The coreferential expressions used to refer to the patient's severe bioprosthetic mitral valve stenosis and severe tricuspid regurgitation in the hospital course section of the discharge summary were "the patient had an irregular heartbeat with a diastolic murmur detected by auscultation" and "Transthoracic echocardiography revealed severe bioprosthetic mitral valve stenosis and severe tricuspid regurgitation." |
>
> for each row, add 2 columns.
>
> Column 3 named 'GPT-4 Rationale': Rationale for how it is is similar or/and diverse with respect to all the other examples in the table.
> Column 4 named 'GPT-4 Diversity Rating': mark for how diverse the example is from all the other examples in the table.
>
> Rating System:
> 0-1: Not Diverse - Almost identical to another example in the table
> 2-3: Very Similar - A somewhat similar example exists in the table
> 4: Fairly Diverse - A fairly dissimilar example from any other example in the table
> 5: Very Diverse - Completely dissimilar to any other example in the table
>
> Return escaped markdown so it can be copied pasted as is.
> ```
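A sketch showing how the precomputed embeddings described above can be used directly, e.g. to score how closely an output matches its source note:

```python
import numpy as np
from datasets import load_dataset

# Uses the precomputed input/output embedding columns from the
# dataset_info block above (sketch only).
dataset = load_dataset(
    "Technoculture/synthetic-clinical-notes-embedded", split="train"
)

row = dataset[0]
a = np.asarray(row["input_embedding"])
b = np.asarray(row["output_embedding"])
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"input/output cosine similarity: {cosine:.3f}")
```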
OEvortex/uncensored-vortex
---
license: other
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md
size_categories:
- 1M<n<10M
task_categories:
- text-generation
tags:
- alpaca
- uncensored
dataset_info:
  features:
  - name: output
    dtype: string
  - name: instruction
    dtype: string
  - name: input
    dtype: string
  splits:
  - name: train
    num_bytes: 7019920163
    num_examples: 8549987
  download_size: 4289084979
  dataset_size: 7019920163
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
Ziyuan111/DurhamTrees
---
license: apache-2.0
language:
- en
size_categories:
- 1M<n<10M
task_categories:
- token-classification
- table-question-answering
---

# Durham Urban Canopy Analysis and Enhancement Initiative (DUCAEI)

The `DurhamTrees` class is a custom dataset class that brings together information from two distinct domains into a unified dataset. This class is designed to streamline the process of working with data from different sources and enable users to seamlessly access and analyze combined datasets.

## Project Overview

![Dataset Preview](https://github.com/AuraMa111/Urban_Tree_Canopy_in_Durham/blob/main/Picture1.png?raw=true)

(I have also uploaded a separate analysis .py file to show some visualizations.)

The Durham Urban Canopy Analysis and Enhancement Initiative (DUCAEI) is committed to utilizing the Trees & Planting Sites dataset for a comprehensive geospatial analysis of Durham's urban tree canopy. Through Python within Google Colab, our aim is to identify key locations for canopy expansion, evaluate the impact of urban development on green spaces, and deliver informed recommendations for the sustainable growth of urban tree coverage.

## Background and Rationale

Durham's urban tree canopy is a crucial component that contributes to environmental quality, public health, and overall city aesthetics. This canopy is under threat due to ongoing urban development and natural wear. A systematic, data-driven approach is critical for strategic planning and conservation of the urban forest to ensure its vitality for generations to come.

## Data Sources and Methodology

These data files are from Durham Open Data. The provided Python script defines a dataset class named `DurhamTrees` using the `datasets` library. This class combines information from two different domains ("class1_domain1" and "class2_domain1") and includes features from both domains.

Trees & Planting Sites Dataset: Hosted on the Durham Open Data portal, this dataset includes the location, species, size, and health of street trees, alongside designated future planting sites.

Data Source: [Durham Trees & Planting Sites Dataset](https://live-durhamnc.opendata.arcgis.com/datasets/DurhamNC::trees-planting-sites/about)

Key components of the script:

1. **Imported Libraries:**
   - `datasets`: for building the dataset.
   - `pandas`: for handling data in tabular form.
   - `geopandas`: for working with geospatial data.
   - Other standard libraries for various functionalities.
2. **URL Definitions:**
   - Specifies URLs for CSV and GeoJSON files from two different domains.
3. **DurhamTrees Class:**
   - Inherits from `datasets.GeneratorBasedBuilder`.
   - Defines configurations for two classes ("class1_domain1" and "class2_domain1").
   - Specifies features for the combined dataset, including features from both classes.
4. **Info Method:**
   - Describes the combined dataset's features, supervised keys, homepage, and citation.
5. **Split Generators Method:**
   - Downloads and extracts data from the provided URLs.
   - Defines split generators for training data.
6. **Generate Examples Methods:**
   - `_generate_examples`: Calls methods to generate examples for both classes.
   - `_generate_examples_from_class1`: Reads CSV and GeoJSON data, merges them, and yields examples.
   - `_generate_examples_from_class2`: Reads CSV data and yields examples.
7. **Column Extraction:**
   - Defines columns to extract for both classes, indicating which features to include in the final dataset.
8. **Example Yielding:**
   - Iterates over rows of the final dataframes, converting each row to a dictionary and yielding examples with unique identifiers.

The script is intended for creating a combined dataset from two different sources, and it uses the `datasets` library to facilitate data handling.

### Data Sources

We will leverage the following files from the Durham Trees & Planting Sites Dataset, as found on the Durham Open Data portal:

- `merge.csv`
- `Trees_&_Planting_Sites.csv`
- `Trees_%26_Planting_Sites.geojson`

# Dataset Card for Urban Tree Inventory

## Dataset Description

This dataset provides comprehensive information about urban trees within a specified area, including their physical characteristics, environmental benefits, and the economic value they add in terms of ecosystem services.

### Spatial Data (GeoJSON)

**Format:** GeoJSON

**Content:**

- **Type:** `FeatureCollection` - A collection of feature objects.
- **Features:** Each feature object represents a tree and contains:
  - **Type:** `Feature`
  - **Geometry:** `Point` (includes longitude and latitude of the tree location).
  - **Properties:** Detailed information about the tree (some fields may overlap with the CSV structure below).

**IMAGE DATA**

```yaml
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype:
      class_label:
        names:
          '0': aechmea_fasciata
          '1': agave_americana
          '2': agave_attenuata
          '3': agave_tequilana
          '4': aglaonema_commutatum
          '5': albuca_spiralis
          '6': allium_cepa
          '7': allium_sativum
```

### Tabular Data (CSV)

**Format:** CSV

**Columns:**

- **X, Y:** Coordinates of the tree location.
- **OBJECTID:** Unique identifier for the tree.
- **streetaddress:** Street address nearest to the tree.
- **city:** City where the tree is located.
- **zipcode:** Zip code for the location of the tree.
- **facilityid:** Identifier for the facility associated with the tree, if any.
- **present:** Indication of whether the tree is currently present.
- **genus, species, commonname:** Botanical and common names of the tree.
- **plantingdate:** Date when the tree was planted.
- **diameterin:** Diameter of the tree trunk in inches.
- **heightft:** Height of the tree in feet.
- **condition:** Health condition of the tree.
- **contractwork:** Indicates if the tree has had any contract work done.
- **neighborhood:** Neighborhood where the tree is located.
- **program:** The program under which the tree was planted.
- **plantingw:** Width of the planting site.
- **plantingcond:** Condition of the planting site.
- **underpwerlins:** Whether the tree is under power lines.
- **matureheight:** The mature height of the tree.
- **GlobalID:** A global unique identifier for the tree.
- **created_user:** The user who created the record.
- **created_date:** The date the record was created.
- **last_edited_user:** The user who last edited the record.
- **last_edited_date:** The date the record was last edited.

#### Environmental and Economic Data:

- **isoprene, monoterpene, vocs:** Emissions and absorption data for various compounds.
- **coremoved_ozperyr, o3removed_ozperyr, etc.:** Annual pollutant removal metrics.
- **o2production_lbperyr:** Annual oxygen production.
- **carbonstorage_lb, carbonstorage_dol:** Carbon storage metrics.
- **grosscarseq_lbperyr, grosscarseq_dolperyr:** Gross carbon sequestration.
- **avoidrunoff_ft2peryr, avoidrunoff_dol2peryr:** Metrics related to stormwater runoff avoidance.
- **totannbenefits_dolperyr:** Total annual dollar benefits from the tree.
- **leafarea_sqft, potevapotran_cuftperyr, etc.:** Metrics related to the water cycle.
- **heating_mbtuperyr, cooling_kwhperyr, etc.:** Energy savings related to the tree's impact on building energy use.

### Example Record

**GeoJSON Feature:**

```json
{
  "crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:OGC:1.3:CRS84" } },
  "features": [
    {
      "type": "Feature",
      "properties": {
        "OBJECTID": 2840940,
        "streetaddress": "411 N GREGSON ST",
        "city": "DURHAM",
        "zipcode": 27701,
        "facilityid": 2936423,
        "present": "Planting Site",
        "genus": null,
        "species": "",
        "commonname": null,
        "plantingdate": null,
        "diameterin": 0.0,
        "heightft": null,
        "condition": null,
        "contractwork": null,
        "neighborhood": "Walltown",
        "program": null,
        "plantingw": "Greater than 5 ft",
        "plantingcond": "Fair",
        "underpwerlins": "No",
        "matureheight": "Large (over 60 feet tall)",
        "GlobalID": "{8BA6662A-8777-473A-82BB-FD77FE6813BB}",
        "created_user": "A1",
        "created_date": "2024-02-03T10:17:12Z",
        "last_edited_user": "A1",
        "last_edited_date": "2024-02-03T10:17:12Z",
        "isoprene": null, "monoterpene": null, "vocs": null,
        "coremoved_ozperyr": null, "coremoved_dolperyr": null,
        "o3removed_ozperyr": null, "o3removed_dolperyr": null,
        "no2removed_ozperyr": null, "no2removed_dolperyr": null,
        "so2removed_ozperyr": null, "so2removed_dolperyr": null,
        "pm10removed_ozperyr": null, "pm10removed_dolperyr": null,
        "pm25removed_ozperyr": null,
        "o2production_lbperyr": null,
        "replacevalue_dol": null,
        "carbonstorage_lb": null, "carbonstorage_dol": null,
        "grosscarseq_lbperyr": null, "grosscarseq_dolperyr": null,
        "avoidrunoff_ft2peryr": null, "avoidrunoff_dol2peryr": null,
        "polremoved_ozperyr": null, "polremoved_dolperyr": null,
        "totannbenefits_dolperyr": null,
        "leafarea_sqft": null,
        "potevapotran_cuftperyr": null,
        "evaporation_cuftperyr": null,
        "transpiration_cuftperyr": null,
        "h2ointercept_cuftperyr": null,
        "avoidrunval_cuftperyr": null, "avoidrunval_dol2peryr": null,
        "carbonavoid_lbperyr": null, "carbonavoid_dolperyr": null,
        "heating_mbtuperyr": null, "heating_dolperyrmbtu": null,
        "heating_kwhperyr": null, "heating_dolperyrmwh": null,
        "cooling_kwhperyr": null, "cooling_dolperyr": null,
        "totalenerg_dolperyr": null
      },
      "geometry": {
        "type": "Point",
        "coordinates": [-78.908630289999962, 36.00441249000005, 0.0]
      }
    }
  ]
}
```

The `Trees_&_Planting_Sites.csv` file encompasses a range of attributes for each record:

- **OBJECTID:** Unique identifier for each record.
- **streetaddr:** Street address where the tree or planting site is located.
- **city:** The city name, which is Durham.
- **zipcode:** Postal code for the location.
- **facilityid:** Identifier possibly linked to a facility or area associated with the tree.
- **present:** Type of feature present, such as a tree or a planting site.
- **genus:** Genus of the tree.
- **species:** Species of the tree.
- **commonname:** Common name of the tree.
- **plantingda:** Date or year range when the tree was planted or the planting site was established.
- ...

### Objectives

1. Combine Shapefile and CSV data into a comprehensive geospatial dataset using Python.
2. Apply Python libraries to uncover relationships between tree canopy data and urban development.
3. Provide practical insights and strategies for the expansion of Durham's urban tree canopy.
4. Produce analyses and visualizations with the GeoJSON file.

### Methodology

Our analytical process within Google Colab will encompass:

- **Data Preparation and Integration:** Using tools like Geopandas, Pandas, and PyShp to organize and combine spatial and tabular data.
- **Geospatial Analysis:** Applying Shapely and Rtree for spatial analysis, and using SciPy or Statsmodels for statistical correlations.
- **Visualization and Optimization:** Generating maps and graphs with Matplotlib, Seaborn, or Plotly, and utilizing optimization algorithms to suggest optimal planting locations.

## Deliverables

1. A collection of Google Colab Python notebooks that outline our analytical processes.
2. Interactive maps and visualizations that connect tree canopy coverage with urban development metrics.
3. An exhaustive report that contains our findings and recommendations for enhancing the urban canopy.

## Limitations

- **Computational Resources:** The limited computational offerings of Google Colab may pose a challenge to the size of the datasets or the complexity of models we can employ.
- **Data Quality:** The accuracy and currency of the data ultimately affect the precision of our recommendations.
- **Sociopolitical Considerations:** Implementation of our data-driven suggestions must be reviewed within the context of local policy and community input.

## Conclusion

DUCAEI aims to create a more verdant and livable urban landscape in Durham through this Python-based analytical project. By laying a strong foundation for data-informed decision-making, we hope to cultivate a thriving, green, and sustainable urban environment.
ajibawa-2023/OpenHermes-2.5-Code-290k
---
license: apache-2.0
language:
- en
tags:
- synthetic
- code
size_categories:
- 1M<n<10M
---

**OpenHermes-2.5-Code-290k**

This dataset is an amalgamation of two datasets. I have used [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5), a super-quality dataset made available by [teknium](https://huggingface.co/teknium). The other dataset is my own [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT).

This dataset is in Vicuna/ShareGPT format. There are around **1.29 million** sets of conversations. I have cleaned the dataset provided by Teknium and removed metadata such as "source" & "category". This dataset consists primarily of synthetically generated instruction and chat samples, and it is very useful for various use cases.
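A loading sketch; the ShareGPT-style `conversations` column with `from`/`value` keys is an assumption based on the format named above:

```python
from datasets import load_dataset

# Sketch only: the "conversations" column with "from"/"value" keys is an
# assumption based on the ShareGPT format named above.
dataset = load_dataset("ajibawa-2023/OpenHermes-2.5-Code-290k", split="train")

for turn in dataset[0]["conversations"]:
    print(turn["from"], ":", turn["value"][:80])
```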
declare-lab/CategoricalHarmfulQA
---
configs:
- config_name: default
  data_files:
  - split: en
    path:
    - data/catqa_english.json
  - split: zh
    path:
    - data/catqa_chinese.json
  - split: vi
    path:
    - data/catqa_vietnamese.json
license: apache-2.0
language:
- en
- zh
- vi
size_categories:
- 1K<n<10K
---

## CatQA: A categorical harmful questions dataset

CatQA is used in LLM safety realignment research: __Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic__ ([Paper](https://arxiv.org/abs/2402.11746#:~:text=Safety%20Re%2DAlignment%20of%20Fine%2Dtuned%20Language%20Models%20through%20Task%20Arithmetic,-Rishabh%20Bhardwaj%2C%20Do&text=Aligned%20language%20models%20face%20a,that%20performs%20LLM%20safety%20realignment.), [Code](https://github.com/declare-lab/resta/tree/main))

## How to download

```
from datasets import load_dataset
dataset = load_dataset("declare-lab/CategoricalHarmfulQA")
```

## What is CatQA?

To comprehensively evaluate the model across a wide range of harmful categories, we construct a new safety evaluation dataset, CatQA. We obtain a combined list of prohibited use cases mentioned in OpenAI usage policies and Meta's Llama 2 acceptable use policy. As shown in the table below, there are 11 main categories of harm we consider. We divide each of these categories into 5 sub-categories. For each sub-category, we frame 10 harmful questions. Each sub-category and question is collected by us with the assistance of a powerful but unaligned LLM (the LLM is helpful but not safety-aligned; we do not disclose the name of the LLM to prevent its misuse). Thus, the dataset consists of 550 harmful questions, 55 of which are shown in the table. While CatQA questions are in English, to evaluate the generalizability of our approach, we extend the dataset to two more languages: Chinese and Vietnamese. First, we translated the English version to the corresponding language using the LLM which assisted us in generating the former version, and then employed language-proficient human annotators to enhance the quality of the dataset.

<img width="897" alt="catqa" src="https://github.com/declare-lab/resta/assets/32847115/6f5f1ba5-1b35-4181-bb05-0995ed1982fc">

## Citation

```bibtex
@misc{bhardwaj2024language,
      title={Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic},
      author={Rishabh Bhardwaj and Do Duc Anh and Soujanya Poria},
      year={2024},
      eprint={2402.11746},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
jdpressman/retro-text-style-transfer-v0.1
---
license: cc0-1.0
language:
- en
tags:
- synthetic
size_categories:
- 10K<n<100K
---

# Retro Textual Style Transfer v0.1

This component of RetroInstruct implements textual style transfer by providing a dataset of

* language model instruction prompts
* that take an example style passage along with a task text
* and rewrite the task text to sound like the style passage

It is made by starting with ground truth public domain text from [the pg19 dataset](https://huggingface.co/datasets/pg19) and then writing task passages to "transfer from" with [Mixtral Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). It is similar in spirit to the "instruction backtranslation" featured in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/abs/2308.06259) by Li et al. However, instead of generating the whole instruction with the text prior, we take a desired task, break it down into parts, and then generate a sub-corpus for each part to be assembled into training data using templates. This allows us to create a large amount of diverse data for a specific task by "indexing" a known-correct answer key with generated questions. The code used to make Retro Textual Style Transfer v0.1 can be found [in this GitHub repository](https://github.com/JD-P/RetroInstruct/).

Here are a few truncated examples from the dataset:

```
Please adapt my words to the style of 'From North Carolina to Southern California Without a Ticket:'

<STYLE>
one time or another and terrorized by them. I recalled the words of the engine coupler at Mobile. When I parted with him, his last remark was, "Look out for the Hoodlums." They are a set of young city bloods and toughs of the worst stripe, banded togeth...
</STYLE>

<TASK>
I went west to cure my asthma, and I had letters saying I was a good worker. I picked the two best ones—from John Shackelford and Frank Powell—to show the guard. The guard was puzzled, but he let me go. My face and hands were dirty from walking, and I wond...
</TASK>

g West to cure the asthma, and that I had letters of recommendation. I had several other letters of this kind in my pocket, but remembering that home reference is said to be the best, I selected only two from the bunch--those of Mr. John Shackelford and M...
```

```
🌟 Please rephrase the task text in a style similar to the one below 🌟

STYLE: START
ave none;[13] also "totem clans," where there are none, for the totemically named associations of the Arunta are not "clans," in the normal and usual sense of that word; they are not kins but associations. Mr. Goldenweizer, in his first category, speaks o...
STYLE: END

TASK: START
We-"We don't have to agree on everything. That's uh, a luxury we don't have. Uh, Goldenweiser says that totemic systems cannot be defined by their shared characteristics. He instead favors defining totemism by the presence of any three features of six list...
TASK: END

ere in agreement. 4. _Totemic taboos_. These, though extremely general, are not quite universal even in Australia. 5. _A belief in descent from the totem_. This belief is post-totemic, being merely one of many aetiological myths by which men explain to ...
```

## Usage

### THIS ISN'T DONE YET

Before using this in a training run you should be aware it's more of an alpha release. A lot of the data is known-flawed by e.g. being too short or suffering from garbage-in garbage-out (as it turns out, Project Gutenberg ebooks contain a lot of weird stuff).
The next step for me is going to be tuning models on this to see how learnable it is and adding a reject-and-redo system with [the MiniHF weave evaluator](https://github.com/JD-P/minihf) that catches weird stuff and known failure modes as I go through the dataset to find the categories of flaws. To repeat that more clearly: This is a dataset I haven't trained on and haven't cleaned, so expect problems unless you do that yourself. I'm mostly putting this up so my testing pipeline uses the same libraries and code that end users will use, so I can publish it to help them out once the dataset is fully ready. ### Use Cases - The primary use case for this dataset is for text authorship pipelines where you need to implement a rewrite pass that makes different pieces of LLM text sound the same. I found when I implemented an essay authorship pipeline based on [WhisperX](https://github.com/m-bain/whisperX) transcripts that it was hard to get it to write like me. - One of the indexing tasks, "Rewrite this passage from {title} as though it were a transcribed interview with occasional verbal tics and hiccups." should allow models trained on this dataset to take verbal transcripts from e.g. Whisper and turn them into prose. - In theory, though I would not rely on it without more careful study, a style transfer pass should provide some resistance to [stylometry based deanonymization attacks](https://boingboing.net/2011/12/29/state-of-adversarial-stylometr.html). It would be interesting to see research along these lines. ### Quickstart ``` import datasets style_transfer = datasets.load_dataset("jdpressman/retro-text-style-transfer-v0.1") for row in style_transfer["train"]: print(row["prompt_open"], "\n\n", row["start_style"], "\n", row["style_passage"], "\n", row["end_style"], "\n\n", row["start_task"], "\n", row["task_passage"], "\n", row["end_task"], "\n\n", row["ground_truth"]) ``` ### License This dataset derives from public domain data and I release my improvements into the public domain as well with the [Creative Commons Zero Public Domain Declaration](https://creativecommons.org/publicdomain/zero/1.0/). Technically pg19 lists its license as "apache2", but the only changes made to the Gutenberg text were censoring certain slurs with an automatic program and removing the headers, which does not qualify them as separate copyrighted works. ## Data Structure ### Row Contents Each row in the dataset consists of nine columns. 0. **title_author** - The index column, taken from pg19 so it is easier to associate a row with its original text in the dataset. 1. **prompt_open** - The initial instruction given to the language model. See **Conditional vs. Unconditional Prompts** below for more information. 2. **start_style** - The start marker for the style passage. 3. **style_passage** - The passage the model is meant to transfer style from. These are random excerpts taken from the same book as the ground truth of roughly the same length. 4. **end_style** - The end marker for the style passage. 5. **start_task** - The start marker for the task text. 6. **task_passage** - The passage onto which the model is meant to transfer the style, which is to say the passage the model is expected to rewrite according to the style given in the previous passage. 7. **end_task** - The end marker for the task text. 8. **ground_truth** - The ground truth answer meant to teach the model the transform that turns its generated task passage into the original known-good text. ### Conditional vs. 
The `prompt_open` column consists of two sub-corpora generated with few shot prompting. Roughly one half of the prompts are **conditional**, which means they include the name of the book and sometimes its author when asking for the style transfer. The other half are **unconditional** because they only provide a style passage with no further authorship or bibliographic information. The conditional prompts tend to be written in a fairly similar professional English style, while the unconditional prompts are generated by morphing a small number of seed prompts according to a set of latent variables. These variables are:

* **Conscientiousness** - The extent to which someone is efficient, careful, and organized. This is probably going to influence things like prompt length, precision of language, how thoroughly the task is described, etc.

* **Agreeableness** - How friendly, compassionate, and easily swayed someone is by others. This probably influences the amount of visible positive affect and politeness.

* **Openness** - How inventive, explorative, and comfortable with the unfamiliar someone is. This factor probably manifests as a tendency to try weird prompt styles that break the mould, or write prompts that are more open ended/place fewer expectations on what the language model will do.

* **Skill** - Skill manifests in prompts as grammar and vocabulary complexity, writing skill, prompting skill, and overall situational awareness/understanding of the nuances of the system they are using. In the context of these prompts it probably means the user will be less likely to use terms like 'task text' and more likely to give confusing or contradictory instructions.

Because style in text is usually subtle, asking the model to generate prompts this way can lead to some fairly goofy emoji-laden text. This text was accepted in the hopes that showing the model overly exaggerated affective examples improves generalization.

An example of a conditional and an unconditional prompt:

**Conditional** - Please adapt my words to the style of 'From North Carolina to Southern California Without a Ticket:'

**Unconditional** - 🌟 Please rephrase the task text in a style similar to the one below 🌟

### Start and End Markers

To hopefully help improve generalization, 20 styles of start and end marker were few shot prompted with Mixtral and rolled from a table during row generation. A minority of the time the markers are mismatched during row generation to help the distribution mimic the kinds of chaotic inconsistencies found in real users' prompts. Because the dataset is distributed in rows, you have the opportunity to further increase variance by e.g. occasionally omitting one of the markers entirely, or substituting your own more expansive set (a minimal sketch of this appears at the end of this card).

#### List of Start And End Markers

These were the start and end markers for style passages used during row generation. The ones for task text are similar but replace words like 'style' with 'task'.
```
style_mark_pool = [{"start":"==START STYLE PASSAGE==","end":"==END STYLE PASSAGE=="},
                   {"start":"[BEGIN STYLE]","end":"[END STYLE]"},
                   {"start":"<STYLE>","end":"</STYLE>"},
                   {"start":"<BEGIN STYLE>","end":"<END STYLE>"},
                   {"start":"{{STYLE:START}}","end":"{{STYLE:END}}"},
                   {"start":"BEGIN STYLE]","end":"[END STYLE"},
                   {"start":"*STYLE START*","end":"*STYLE END*"},
                   {"start":"BEGIN STYLE TEXT","end":"CONCLUDE STYLE TEXT"},
                   {"start":"STYLE: START","end":"STYLE: END"},
                   {"start":"STYLE:","end":"END STYLE"},
                   {"start":"STYLE_START","end":"STYLE_END"},
                   {"start":"--START--","end":"--END--"},
                   {"start":"***START***","end":"***END***"},
                   {"start":"[STYLE:START]","end":"[STYLE:END]"},
                   {"start":"!BEGIN STYLE!","end":"!END STYLE!"},
                   {"start":"EXAMPLE PASSAGE","end":"END EXAMPLE"},
                   {"start":"EXAMPLE TEXT STYLE","end":"END EXAMPLE TEXT STYLE"},
                   {"start":"EXAMPLE_START","end":"EXAMPLE_END"},
                   {"start":"THE FOLLOWING PASSAGE","end":"END OF THE PREVIOUS PASSAGE"},
                   {"start":"BEGIN TARGET PASSAGE","end":"END TARGET PASSAGE"}]
```

## Biases and Limitations

Just because Mixtral rewrote the text in these books to have a more modern style doesn't mean the underlying ideas have changed. This is derived from a collection of 100+ year old books, many of which contain offensive, outdated, obsolete, etc. ideas. The underlying pg19 dataset claims to have filtered out a list of slurs and otherwise left the text untouched. One potential consequence of training on it is that your model will be more likely to bring up ideas normally relegated to the "old, 19th century, historical" part of its latent space while writing modern text.

Because the style transfer is always into the "ground truth" text, which is quite old, a model trained on this dataset will probably be a lot better at translating modern text into archaic styles than the reverse order. This can probably be partially mitigated by occasionally reversing the order of the index and ground truth during training. The hope is that because the text is diverse the model will generalize to being able to do at least some modern styles. In the future I would like to publish a supplement module with modern ground truth text to help alleviate this.

## Planned Improvements

- Decent [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) LoRA trained on this task
- Associated performance numbers for various models tuned on this dataset
- Better prompting during the indexing stage
- Better filtering of garbage data during the indexing stage
- Modern text supplement likely published as separate component/module
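## Example: Assembling A Training Prompt

To make the marker-variance suggestion above concrete, here is a minimal sketch of turning a row into a single training string while occasionally dropping a marker. The column names follow the Quickstart; the 10% drop probability is an arbitrary illustrative choice, not something used to build the dataset.

```python
import random

def assemble_prompt(row, drop_marker_prob=0.1):
    # Occasionally drop a marker entirely to mimic the chaotic
    # inconsistencies found in real user prompts.
    def maybe(marker):
        return "" if random.random() < drop_marker_prob else marker

    parts = [
        row["prompt_open"],
        maybe(row["start_style"]),
        row["style_passage"],
        maybe(row["end_style"]),
        maybe(row["start_task"]),
        row["task_passage"],
        maybe(row["end_task"]),
    ]
    # The joined parts form the model input; the completion target is
    # the original known-good text.
    return "\n".join(p for p in parts if p), row["ground_truth"]
```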
ParasiticRogue/Bluemoon-Light
---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- not-for-all-audiences
---

Newly trimmed, cleaned, and grammar checked, all of which is separate from previous versions of Bluemoon, and edited to hold up better under parquet conversion and quantization formats such as exl2 or gguf, making models slightly more stable during creative use. Each row is under 2,048 tokens in length (832 rows total), with extra stop tokens on each side to help with performance.

If you are using a model with a prompt template other than Vicuna, ChatML, or the specialized Chat-Vicuna, just take the csv file, edit the top section and/or stop tokens to your preferred template, and convert it yourself (a rough sketch of this is shown below). The files from the automatic parquet converter may be missing rows or be unfinished/unpruned, so use the ones included in the main files proper instead.

Original template dataset below:

https://huggingface.co/datasets/tastypear/bluemoon-cleaned-lewd?not-for-all-audiences=true
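As a rough sketch of the conversion step described above (the column name `text` and the Vicuna-style role tags are assumptions, so print the header and inspect a row before adapting this):

```python
import csv

with open("bluemoon.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)
    print(reader.fieldnames)  # inspect the real column layout first
    rows = list(reader)

for row in rows:
    # Swap the assumed Vicuna-style role tags for ChatML-style ones;
    # end-of-turn tokens differ per template and need handling beyond
    # a bare string replace.
    row["text"] = (row["text"]
                   .replace("USER:", "<|im_start|>user")
                   .replace("ASSISTANT:", "<|im_start|>assistant"))
```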
2A2I/argilla-dpo-mix-7k-arabic
--- configs: - config_name: default data_files: - split: test path: data/test-* - split: train path: data/train-* dataset_info: features: - name: dataset dtype: string - name: chosen dtype: string - name: rejected dtype: string - name: chosen_rating dtype: float64 - name: rejected_rating dtype: float64 splits: - name: test num_bytes: 6991078 num_examples: 750 - name: train num_bytes: 62886912 num_examples: 6750 download_size: 30613280 dataset_size: 69877990 license: mit language: - ar tags: - synthetic - dpo - distilabel size_categories: - 1K<n<10K --- # Dataset Card for "argilla-dpo-mix-7k-arabic" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
allganize/en-fpb-ko
---
dataset_info:
  features:
  - name: conversation_id
    dtype: string
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  splits:
  - name: test
    num_bytes: 333061
    num_examples: 944
  download_size: 109275
  dataset_size: 333061
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: mit
language:
- ko
---

# en-fpb-ko

### Dataset Description

- `en-fpb-ko` is a sentiment analysis dataset that classifies sentences from financial news as one of 'positive' (긍정), 'neutral' (중립), or 'negative' (부정). Only the text is given as input.
- To build the Korean data, we first translated the test set of [ChanceFocus/en-fpb](https://huggingface.co/datasets/ChanceFocus/en-fpb) into Korean using Allganize Translator, our in-house translation model. We then manually removed mistranslated examples, resulting in 944 evaluation examples.

### Data Source

- [ChanceFocus/en-fpb](https://huggingface.co/datasets/ChanceFocus/en-fpb)

### Data Example

```
{
  'conversation_id': 'fpb3876',
  'conversations': array([
    {
      'from': 'human',
      'value': '''금융 뉴스 기사에서 발췌한 이 문장의 정서를 분석하세요. 부정, 긍정 또는 중립 중 하나로 답을 제시하세요.
텍스트: 회사 간의 오랜 협력 관계를 확대하는 새로운 계약에는 Larox에서 Etteplan으로 특정 엔지니어링 및 문서 기능의 이전이 포함됩니다.
정답:'''
    },
    {
      'from': 'gpt',
      'value': '긍정'
    }
  ], dtype=object)
}
```

(The human turn asks the model to analyze the sentiment of a sentence excerpted from a financial news article and answer with one of negative, positive, or neutral; the gpt turn here answers '긍정', i.e. positive.)
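### Usage Example

A minimal loading sketch; the field names follow the metadata above, and the Korean labels map to positive/neutral/negative:

```python
from datasets import load_dataset

ds = load_dataset("allganize/en-fpb-ko", split="test")

example = ds[0]
turns = example["conversations"]
prompt = next(t["value"] for t in turns if t["from"] == "human")
label = next(t["value"] for t in turns if t["from"] == "gpt")
# `label` is one of '긍정' (positive), '중립' (neutral), or '부정' (negative).
print(label)
```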
AdaptLLM/ConvFinQA
---
configs:
- config_name: ConvFinQA
  data_files:
  - split: train
    path: train_turn.json
  - split: validation
    path: dev_turn.json
task_categories:
- text-classification
- question-answering
- zero-shot-classification
language:
- en
tags:
- finance
---

# Domain Adaptation of Large Language Models

This repo contains the **ConvFinQA dataset** used in our **ICLR 2024** paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).

We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.

### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗

**************************** **Updates** ****************************
* 2024/4/2: Released the raw data splits (train and test) of all the evaluation datasets
* 2024/1/16: 🎉 Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024!!!🎉
* 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B.
* 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B.
* 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B.

## Domain-Specific LLaMA-1

### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Hugging Face: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of AdaptLLM compared to other domain-specific LLMs is shown below:

<p align='center'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>

### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).

## Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation.
We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).

## Domain-Specific Tasks

### Pre-templatized/Formatted Testing Splits

To easily reproduce our prompting results, we have uploaded the filled-in zero/few-shot input instructions and output completions of the test split of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).

**Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit the specific data format required for chat models.

### Raw Datasets

We have also uploaded the raw training and testing splits to facilitate fine-tuning and other uses:

- [ChemProt](https://huggingface.co/datasets/AdaptLLM/ChemProt)
- [RCT](https://huggingface.co/datasets/AdaptLLM/RCT)
- [ConvFinQA](https://huggingface.co/datasets/AdaptLLM/ConvFinQA)
- [FiQA_SA](https://huggingface.co/datasets/AdaptLLM/FiQA_SA)
- [Headline](https://huggingface.co/datasets/AdaptLLM/Headline)
- [NER](https://huggingface.co/datasets/AdaptLLM/NER)
- [FPB](https://huggingface.co/datasets/AdaptLLM/FPB)

The other datasets used in our paper are already available on Hugging Face, and you can directly load them with the following code:

```python
from datasets import load_dataset

# MQP:
dataset = load_dataset('medical_questions_pairs')
# PubmedQA:
dataset = load_dataset('bigbio/pubmed_qa')
# USMLE:
dataset = load_dataset('GBaker/MedQA-USMLE-4-options')
# SCOTUS
dataset = load_dataset("lex_glue", 'scotus')
# CaseHOLD
dataset = load_dataset("lex_glue", 'case_hold')
# UNFAIR-ToS
dataset = load_dataset("lex_glue", 'unfair_tos')
```

## Citation

If you find our work helpful, please cite us:

```bibtex
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}
```

and the original dataset:

```bibtex
@inproceedings{ConvFinQA,
  author    = {Zhiyu Chen and Shiyang Li and Charese Smiley and Zhiqiang Ma and Sameena Shah and William Yang Wang},
  title     = {ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering},
  booktitle = {{EMNLP}},
  pages     = {6279--6292},
  publisher = {Association for Computational Linguistics},
  year      = {2022}
}
```
toloka/CrowdSpeech
---
annotations_creators:
- found
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
- automatic-speech-recognition
- text2text-generation
task_ids: []
paperswithcode_id: crowdspeech
pretty_name: CrowdSpeech
language_bcp47:
- en-US
tags:
- conditional-text-generation
- structured-to-text
- speech-recognition
---

# Dataset Card for CrowdSpeech

## Dataset Description

- **Repository:** [GitHub](https://github.com/Toloka/CrowdSpeech)
- **Paper:** [Paper](https://openreview.net/forum?id=3_hgF1NAXU7)
- **Point of Contact:** research@toloka.ai

### Dataset Summary

CrowdSpeech is the first publicly available large-scale dataset of crowdsourced audio transcriptions. The dataset was constructed by annotating [LibriSpeech](https://www.openslr.org/12) on the [Toloka crowdsourcing platform](https://toloka.ai). CrowdSpeech consists of 22K instances with around 155K annotations obtained from crowd workers.

### Supported Tasks and Leaderboards

Aggregation of crowd transcriptions.

### Languages

English

## Dataset Structure

### Data Instances

A data instance contains a url to the audio recording, a list of transcriptions along with the corresponding performers' identifiers, and the ground truth. For each data instance, seven crowdsourced transcriptions are provided.

```
{'task': 'https://tlk.s3.yandex.net/annotation_tasks/librispeech/train-clean/0.mp3',
'transcriptions': "had laid before her a pair of alternatives now of course you're completely your own mistress and are as free as the bird on the bough i don't mean you were not so before but you're at present on a different footing | had laid before her a pair of alternatives now of course you are completely your own mistress and are as free as the bird on the bowl i don't mean you were not so before but you were present on a different footing | had laid before her a pair of alternatives now of course you're completely your own mistress and are as free as the bird on the bow i don't mean you are not so before but you're at present on a different footing | had laid before her a pair of alternatives now of course you're completely your own mistress and are as free as the bird on the bow i don't mean you are not so before but you're at present on a different footing | laid before her a pair of alternativesnow of course you're completely your own mistress and are as free as the bird on the bow i don't mean you're not so before but you're at present on a different footing | had laid before her a peril alternatives now of course your completely your own mistress and as free as a bird as the back bowl i don't mean you were not so before but you are present on a different footing | a lady before her a pair of alternatives now of course you're completely your own mistress and rs free as the bird on the ball i don't need you or not so before but you're at present on a different footing",
'performers': '1154 | 3449 | 3097 | 461 | 3519 | 920 | 3660',
'gt': "had laid before her a pair of alternatives now of course you're completely your own mistress and are as free as the bird on the bough i don't mean you were not so before but you're at present on a different footing"}
```

### Data Fields

* task: a string containing a url of the audio recording
* transcriptions: a list of the crowdsourced transcriptions separated by '|'
* performers: the corresponding performers' identifiers.
* gt: the ground truth transcription

### Data Splits

There are five splits in the data: train, test, test.other, dev.clean and dev.other. The train, test and dev.clean splits correspond to the *clean* part of LibriSpeech, which contains audio recordings of higher quality with speaker accents closer to US English. The dev.other and test.other splits correspond to the *other* part of LibriSpeech, whose recordings are more challenging for recognition. The audio recordings are gender-balanced.

## Dataset Creation

### Source Data

[LibriSpeech](https://www.openslr.org/12) is a corpus of approximately 1000 hours of 16kHz read English speech.

### Annotations

Annotation was done on the [Toloka crowdsourcing platform](https://toloka.ai) with an overlap of 7 (that is, each task was performed by 7 annotators). Only annotators who self-reported knowledge of English had access to the annotation task. Additionally, annotators had to pass an *Entrance Exam*: we ask all incoming eligible workers to annotate ten audio recordings, compute our target metric, Word Error Rate (WER), on these recordings, and accept into the main task all workers who achieve a WER of 40% or less (the smaller the value of the metric, the higher the quality of annotation).

The Toloka crowdsourcing platform associates workers with unique identifiers and returns these identifiers to the requester. To further protect the data, we additionally encode each identifier with an integer that is eventually reported in our released datasets. See more details in the [paper](https://arxiv.org/pdf/2107.01091.pdf).

### Citation Information

```
@inproceedings{CrowdSpeech,
  author    = {Pavlichenko, Nikita and Stelmakh, Ivan and Ustalov, Dmitry},
  title     = {{CrowdSpeech and Vox~DIY: Benchmark Dataset for Crowdsourced Audio Transcription}},
  year      = {2021},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks},
  eprint    = {2107.01091},
  eprinttype = {arxiv},
  eprintclass = {cs.SD},
  url       = {https://openreview.net/forum?id=3_hgF1NAXU7},
  language  = {english},
  pubstate  = {forthcoming},
}
```
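### Loading Example

Since `transcriptions` and `performers` are stored as single `' | '`-joined strings, a minimal parsing sketch (the separator is inferred from the example above):

```python
from datasets import load_dataset

crowd = load_dataset("toloka/CrowdSpeech", split="train")

row = crowd[0]
# Split the joined fields back into aligned per-annotator lists.
transcriptions = row["transcriptions"].split(" | ")
performers = row["performers"].split(" | ")
assert len(transcriptions) == len(performers) == 7
print(performers[0], transcriptions[0])
```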
valurank/PoliticalBias
---
license:
- other
language:
- en
multilinguality:
- monolingual
task_categories:
- classification
task_ids:
- classification
---

# Dataset Card for PoliticalBias

## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)

## Dataset Description

The dataset contains roughly 8,200 articles written by the website's editors, each article covering one topic with 3 links that describe the same piece of news from different angles (usually one from the right, one from the left, and one from the center).

## Languages

The text in the dataset is in English.

## Dataset Structure

The dataset consists of four columns, namely Left, Right, Center, and Main URL.

## Source Data

The dataset is scraped from http://allsides.com/
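## Loading Example

A minimal loading sketch; the split name and the exact column names are assumptions based on the description above:

```python
from datasets import load_dataset

bias = load_dataset("valurank/PoliticalBias", split="train")

row = bias[0]
# Each topic pairs one left-, one center-, and one right-leaning article.
print(row["Main URL"], row["Left"], row["Center"], row["Right"])
```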
pensieves/mimicause
---
license: apache-2.0
pretty_name: MIMICause
---

# Dataset Card for "MIMICause"

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/](https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/)
- **Paper:** [MIMICause: Representation and automatic extraction of causal relation types from clinical notes](https://arxiv.org/abs/2110.07090)
- **Size of downloaded dataset files:** 333.4 KB
- **Size of the generated dataset:** 491.2 KB
- **Total amount of disk used:** 668.2 KB

### Dataset Summary

MIMICause is a dataset for the representation and automatic extraction of causal relation types from clinical notes. The MIMICause dataset requires a manual download of the mimicause.zip file from the **Community Annotations Downloads** section of the n2c2 dataset on [Harvard's DBMI Data Portal](https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/) after signing their agreement forms, which is a quick and easy procedure.

The dataset has 2714 samples with both explicit and implicit causality, in which the entities may appear in the same sentence or in different sentences. The nine semantic causal relations (with directionality) between entities E1 and E2 in a text snippet are:

1. Cause(E1,E2)
2. Cause(E2,E1)
3. Enable(E1,E2)
4. Enable(E2,E1)
5. Prevent(E1,E2)
6. Prevent(E2,E1)
7. Hinder(E1,E2)
8. Hinder(E2,E1)
9. Other

### Supported Tasks

Causal relation extraction between entities expressed implicitly or explicitly, in single or across multiple sentences.

## Dataset Structure

### Data Instances

An example of a data sample looks as follows:

```
{
  "E1": "Florinef",
  "E2": "fluid retention",
  "Text": "Treated with <e1>Florinef</e1> in the past, was d/c'd due to <e2>fluid retention</e2>.",
  "Label": 0
}
```

### Data Fields

The data fields are the same among all the splits.

- `E1`: a `string` value.
- `E2`: a `string` value.
- `Text`: a `large_string` value.
- `Label`: a `ClassLabel` categorical value.

### Data Splits

The original dataset downloaded from [Harvard's DBMI Data Portal](https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/) has all the data in a single split. The dataset loading provided here through huggingface datasets splits the data into the following train, validation and test splits for convenience.

| name |train|validation|test|
|---------|----:|---------:|---:|
|mimicause| 1953| 489 | 272|

## Additional Information

### Citation Information

```
@inproceedings{khetan-etal-2022-mimicause,
title={MIMICause: Representation and automatic extraction of causal relation types from clinical notes},
author={Vivek Khetan and Md Imbesat Hassan Rizvi and Jessica Huber and Paige Bartusiak and Bogdan Sacaleanu and Andrew Fano},
booktitle ={Findings of the Association for Computational Linguistics: ACL 2022},
month={may},
year={2022},
publisher={Association for Computational Linguistics},
address={Dublin, The Republic of Ireland},
url={},
doi={},
pages={},
}
```
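### Usage Example

Once mimicause.zip has been obtained as described above, a minimal loading sketch. Passing `data_dir` is the usual convention for manual-download datasets; the path here is a placeholder:

```python
from datasets import load_dataset

# Point data_dir at the folder containing the downloaded mimicause.zip.
dataset = load_dataset("pensieves/mimicause", data_dir="path/to/downloads")

print(dataset)               # train / validation / test splits as in the table above
print(dataset["train"][0])   # {'E1': ..., 'E2': ..., 'Text': ..., 'Label': ...}
```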
shpotes/bosch-small-traffic-lights-dataset
--- license: other ---
hazal/Turkish-Biomedical-corpus-trM
--- language: - tr ---
copenlu/sufficient_facts
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - mit multilinguality: - monolingual pretty_name: sufficient_facts size_categories: - 1K<n<10K source_datasets: - extended|fever - extended|hover - extended|fever_gold_evidence task_categories: - text-classification task_ids: - fact-checking --- # Dataset Card for sufficient_facts ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/copenlu/sufficient_facts - **Repository:** https://github.com/copenlu/sufficient_facts - **Paper:** Will be uploaded soon... - **Leaderboard:** - **Point of Contact:** https://apepa.github.io/ ### Dataset Summary This is the dataset SufficientFacts, introduced in the paper "Fact Checking with Insufficient Evidence", accepted at the TACL journal in 2022. Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, **SufficientFacts**, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21% accuracy), whereas it is easiest for omitted date modifiers (63% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score. ### Languages English ## Dataset Structure The dataset consists of three files, each for one of the datasets -- FEVER, HoVer, and VitaminC. 
Each file consists of json lines of the format: ```json { "claim": "Unison (Celine Dion album) was originally released by Atlantic Records.", "evidence": [ [ "Unison (Celine Dion album)", "The album was originally released on 2 April 1990 ." ] ], "label_before": "REFUTES", "label_after": "NOT ENOUGH", "agreement": "agree_ei", "type": "PP", "removed": ["by Columbia Records"], "text_orig": "[[Unison (Celine Dion album)]] The album was originally released on 2 April 1990 <span style=\"color:red;\">by Columbia Records</span> ." } ``` ### Data Instances * FEVER: 600 consituent-level, 400 sentence-level; * HoVer - 600 consituent-level, 400 sentence-level; * VitaminC - 600 consituent-level. ### Data Fields * `claim` - the claim that is being verified * `evidence` - the augmented evidence for the claim, i.e. the evidence with some removed information * `label_before` - the original label for the claim-evidence pair, before information was removed from the evidence * `label_after` - the label for the augmented claim-evidence pair, after information was removed from the evidence, as annotated by crowd-source workers * `type` - type of the information removed from the evidence. The types are fine-grained and their mapping to the general types -- 7 constituent and 1 sentence type can be found in [types.json](types.json) file. * `removed` - the text of the removed information from the evidence * `text_orig` - the original text of the evidence, as presented to crowd-source workers, the text of the removed information is inside `<span style=\"color:red;\"></span>` tags. ### Data Splits | name |test_fever|test_hover|test_vitaminc| |----------|-------:|-----:|-------:| |test| 1000| 1000| 600| Augmented from the test splits of the corresponding datasets. ### Annotations #### Annotation process The workers were provided with the following task description: For each evidence text, some facts have been removed (marked in <span style="color:red;">red</span>). You should annotate whether, <b>given the remaining facts in the evidence text, the evidence is still enough for verifying the claim.</b> <br></br> <ul> <li>You should select <i><b>'ENOUGH -- IRRELEVANT'</b></i>, if the <b>remaining information is still <i>enough</i></b> for verifying the claim because the <b>removed information is irrelevant</b> for identifying the evidence as SUPPORTS or REFUTES. See examples 1 and 2.</li> <li>You should select <i><b>'ENOUGH -- REPEATED'</b></i>, if the <b>remaining information is still <i>enough</i></b> for verifying the claim because the <b>removed information is relevant but is also present (repeated) in the remaining (not red) text.</b> See example 3.</li> <li>You should select <i><b>'NOT ENOUGH'</b></i> -- when <b>1) the removed information is <i>relevant</i></b> for verifying the claim <b> AND 2) it is <i>not present (repeated)</i> in the remaining text.</b> See examples 4, 5, and 6.</li> <!--<li>You should select <i><b>'CHANGED INFO'</b></i> in the rare cases when the remaining evidence has <b>changed the support for the claim</b></li>--> </ul> <b>Note: You should not incorporate your own knowledge or beliefs! You should rely only on the evidence provided for the claim.</b> The annotators were then given example instance annotations. Finally, annotators were asked to complete a qualification test in order to be allowed to annotate instances for the task. The resulting inter-annotator agreement for SufficientFacts is 0.81 Fleiss'k from three annotators. #### Who are the annotators? 
The annotations were performed by workers at Amazon Mechanical Turk. ## Additional Information ### Licensing Information MIT ### Citation Information ``` @article{10.1162/tacl_a_00486, author = {Atanasova, Pepa and Simonsen, Jakob Grue and Lioma, Christina and Augenstein, Isabelle}, title = "{Fact Checking with Insufficient Evidence}", journal = {Transactions of the Association for Computational Linguistics}, volume = {10}, pages = {746-763}, year = {2022}, month = {07}, abstract = "{Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, SufficientFacts1, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21\\% accuracy), whereas it is easiest for omitted date modifiers (63\\% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.}", issn = {2307-387X}, doi = {10.1162/tacl_a_00486}, url = {https://doi.org/10.1162/tacl\_a\_00486}, eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00486/2037141/tacl\_a\_00486.pdf}, } ``` ### Contributions Thanks to [@apepa](https://github.com/apepa) for adding this dataset.
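### Usage Example

A minimal loading sketch. The config name below is an assumption based on the three source datasets (FEVER, HoVer, VitaminC); check the repository file list if it fails to resolve:

```python
from datasets import load_dataset

fever = load_dataset("copenlu/sufficient_facts", "fever", split="test")

# Count how many augmented claim-evidence pairs became insufficient.
insufficient = fever.filter(lambda ex: ex["label_after"] == "NOT ENOUGH")
print(f"{len(insufficient)}/{len(fever)} augmented pairs are NOT ENOUGH")
```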
strombergnlp/itu_faroese_danish
---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
- fo
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: ITU Faroese Danish parallel text
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids: []
---

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:** [https://arxiv.org/abs/2206.08727](https://arxiv.org/abs/2206.08727)
- **Leaderboard:**
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)

### Dataset Summary

This is a native-speaker-generated parallel corpus of Faroese and Danish.

### Supported Tasks and Leaderboards

*

### Languages

* Danish
* Faroese

## Dataset Structure

### Data Instances

3995 parallel sentences

### Data Fields

* `id`: the sentence pair ID, `string`
* `origin`: the original sentence identifier text, `string`
* `fo`: the Faroese text, `string`
* `da`: the Danish text, `string`

### Data Splits

Monolithic

## Dataset Creation

### Curation Rationale

To gather a broad range of topics about the Faroes and the rest of the world, and to enable a general-purpose Faroese:Danish translation system.

### Source Data

#### Initial Data Collection and Normalization

* EUROparl Danish
* Dimmaletting, Faroese newspaper
* Tatoeba Danish / Faroese

#### Who are the source language producers?

### Annotations

#### Annotation process

No annotations

#### Who are the annotators?

Two Faroese native speakers, one male and one female, in their 20s, with master's degrees, living in Denmark.

### Personal and Sensitive Information

None, due to the sources used.

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

This collection of Faroese is curated by Leon Derczynski.

### Licensing Information

Creative Commons Attribution 4.0

### Citation Information

```
```
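### Usage Example

A minimal loading sketch using the fields documented above (the single monolithic split is assumed to load as `train`):

```python
from datasets import load_dataset

pairs = load_dataset("strombergnlp/itu_faroese_danish", split="train")

for pair in pairs.select(range(3)):
    # `fo` is the Faroese side and `da` the Danish side of each sentence pair.
    print(pair["fo"], "->", pair["da"])
```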
mustapha/QuranExe
---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- ar
license:
- mit
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: QuranExe
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- sentence-similarity
task_ids:
- language-modeling
- masked-language-modeling
---

## Dataset Description

- **Size of downloaded dataset files:** 126 MB

This dataset contains the exegeses/tafsirs (تفسير القرآن) of the holy Quran in Arabic by 8 exegetes. This is a non-official dataset that has been scraped from the `Quran.com Api`.

The dataset contains `49888` records with over 14 million words: `8` records per Quranic verse.

Usage example:

```python
from datasets import load_dataset

tafsirs = load_dataset("mustapha/QuranExe")
```
readerbench/ro-fb-offense
---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ro
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
pretty_name: RO-FB-Offense
extra_gated_prompt: 'Warning: this repository contains harmful content (abusive language, hate speech).'
tags:
- hate-speech-detection
---

# Dataset Card for "RO-FB-Offense"

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://github.com/readerbench/ro-fb-offense](https://github.com/readerbench/ro-fb-offense)
- **Repository:** [https://github.com/readerbench/ro-fb-offense](https://github.com/readerbench/ro-fb-offense)
- **Paper:** FB-RO-Offense – A Romanian Dataset and Baseline Models for detecting Offensive Language in Facebook Comments
- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)

### Dataset Summary

The FB-RO-Offense corpus is an offensive speech dataset containing 4,455 Romanian user-generated comments from Facebook live broadcasts.

The annotation follows the hierarchical tagset proposed in the Germeval 2018 dataset. The following classes are available:

* OTHER: Non-Offensive Language
* OFFENSIVE:
  - PROFANITY
  - INSULT
  - ABUSE

### Languages

Romanian

## Dataset Structure

### Data Instances

An example of 'train' looks as follows.

```
{
  'sender': '$USER1208',
  'no_reacts': 1,
  'text': 'PLACEHOLDER TEXT',
  'label': OTHER,
}
```

### Data Fields

- `sender`: a `string` feature.
- `no_reacts`: an `integer`.
- `text`: a `string`.
- `label`: categorical `OTHER`, `PROFANITY`, `INSULT`, `ABUSE`

### Data Splits

| name |train|test|
|---------|----:|---:|
|ro|x|x|

## Dataset Creation

### Curation Rationale

Collecting data for abusive language classification for the Romanian language.

### Source Data

Facebook comments

#### Initial Data Collection and Normalization

#### Who are the source language producers?

Social media users

### Annotations

#### Annotation process

#### Who are the annotators?

Native speakers

### Personal and Sensitive Information

The data was public at the time of collection. No PII removal has been performed.

## Considerations for Using the Data

### Social Impact of Dataset

The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

This data is available and distributed under the Apache-2.0 license.

### Citation Information

```
@inproceedings{busuioc2022fb-ro-offense,
title={FB-RO-Offense – A Romanian Dataset and Baseline Models for detecting Offensive Language in Facebook Comments},
author={Busuioc, Gabriel-Razvan and Paraschiv, Andrei and Dascalu, Mihai},
booktitle={International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC) 2022},
year={2022}
}
```

### Contributions
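### Usage Example

A minimal loading sketch for inspecting the class balance. The split name is an assumption; if `label` loads as a `ClassLabel` rather than a string, map ids back via `ds.features["label"].names`:

```python
from collections import Counter

from datasets import load_dataset

ro_fb = load_dataset("readerbench/ro-fb-offense", split="train")

# Count examples per class: OTHER / PROFANITY / INSULT / ABUSE.
print(Counter(ro_fb["label"]))
```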
RUCAIBox/Chinese-Generation
---
language:
- zh
multilinguality:
- monolingual
task_categories:
- summarization
- text2text-generation
- text-generation
task_ids: []
---

These are the Chinese generation datasets collected by TextBox, including:

- LCSTS (lcsts)
- CSL (csl)
- ADGEN (adgen)

The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
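A minimal loading sketch; the config names are taken from the list above, while the split names are an assumption:

```python
from datasets import load_dataset

# Each sub-dataset loads as its own config.
lcsts = load_dataset("RUCAIBox/Chinese-Generation", "lcsts")
print(lcsts)
```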
IDEA-CCNL/laion2B-multi-chinese-subset
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- zh
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: laion2B-multi-chinese-subset
task_categories:
- feature-extraction
---

# laion2B-multi-chinese-subset

- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)

## 简介 Brief Introduction

取自Laion2B多语言多模态数据集中的中文部分，一共143M个图文对。

A subset from Laion2B (a multimodal dataset), around 143M image-text pairs (only Chinese).

## 数据集信息 Dataset Information

大约一共143M个中文图文对。大约占用19GB空间（仅仅是url等文本信息，不包含图片）。

Around 143M Chinese image-text pairs in total, taking up about 19GB of space (text information such as URLs only; images are not included).

- Homepage: [laion-5b](https://laion.ai/blog/laion-5b/)
- Huggingface: [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)

## 下载 Download

```bash
mkdir laion2b_chinese_release && cd laion2b_chinese_release
for i in {00000..00012}; do wget https://huggingface.co/datasets/IDEA-CCNL/laion2B-multi-chinese-subset/resolve/main/data/train-$i-of-00013.parquet; done
cd ..
```

## License

CC-BY-4.0

## 引用 Citation

如果您在您的工作中使用了我们的模型，可以引用我们的[论文](https://arxiv.org/abs/2209.02970)：

If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):

```text
@article{fengshenbang,
  author    = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
  title     = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
  journal   = {CoRR},
  volume    = {abs/2209.02970},
  year      = {2022}
}
```

也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/)：

You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

```text
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
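## 使用示例 Usage Example

After downloading, a minimal reading sketch. The column names (e.g. `URL`, `TEXT`) follow the upstream laion2B-multi release and should be verified against the actual files:

```python
import glob

import pandas as pd

# Concatenate all downloaded parquet shards into one dataframe.
df = pd.concat(pd.read_parquet(p) for p in glob.glob("laion2b_chinese_release/*.parquet"))
print(len(df), list(df.columns))
```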
skytnt/anime-segmentation
---
annotations_creators: []
language: []
language_creators: []
license:
- cc0-1.0
multilinguality: []
pretty_name: Anime Segmentation
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
---

## Dataset Description

A segmentation dataset for anime characters.

My project: [anime-segmentation](https://github.com/SkyTNT/anime-segmentation)

### Dataset Summary

| Dir | Description | Format | Images |
| ---- | ---- | ---- | ---- |
| bg | background images | jpg | 8057 |
| fg | foreground images, transparent background | png | 11802 |
| imgs | real images with background and foreground | jpg | 1111 |
| masks | labels for imgs | jpg | 1111 |

Total size: 18GB

### Collection Method

Collect background images from [character_bg_seg_data](https://github.com/ShuhongChen/bizarre-pose-estimator#download).

Collect foreground images from the danbooru website.

Collect imgs and masks from [AniSeg](https://github.com/jerryli27/AniSeg#about-the-models) and the danbooru website.

I use [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) to restore the background images.

I clean the dataset first with [DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) and then manually, to make sure every foreground is an anime character.

### Contributions

Thanks to [@SkyTNT](https://github.com/SkyTNT) for adding this dataset.

Thanks to [@ShuhongChen](https://github.com/ShuhongChen) for [character_bg_seg_data](https://github.com/ShuhongChen/bizarre-pose-estimator#download)

Thanks to [@jerryli27](https://github.com/jerryli27) for [AniSeg](https://github.com/jerryli27/AniSeg#about-the-models)
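### Usage Example

Since the foreground and background directories are intended for synthesizing training images, here is a minimal compositing sketch. The file names are placeholders, and a real pipeline would use random placement and scaling rather than stretching the foreground over the whole background:

```python
from PIL import Image

bg = Image.open("bg/00000.jpg").convert("RGB")
fg = Image.open("fg/00000.png").convert("RGBA")

# Stretch the foreground to the background size for simplicity.
fg = fg.resize(bg.size)

# The alpha channel of the transparent foreground doubles as the mask.
mask = fg.getchannel("A")
composite = Image.composite(fg.convert("RGB"), bg, mask)

composite.save("synthetic_img.jpg")
mask.save("synthetic_mask.jpg")
```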
awacke1/SNOMED-CT-Code-Value-Semantic-Set.csv
--- license: mit --- SNOMED-CT-Code-Value-Semantic-Set.csv
bigbio/scai_chemical
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: SCAI Chemical
homepage: https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpora-for-chemical-entity-recognition.html
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---

# Dataset Card for SCAI Chemical

## Dataset Description

- **Homepage:** https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpora-for-chemical-entity-recognition.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER

SCAI Chemical is a corpus of MEDLINE abstracts that has been annotated to give an overview of the different chemical name classes found in MEDLINE text.

## Citation Information

```
@inproceedings{kolarik:lrec-ws08,
  author    = {Kol{\'a}{\v r}ik, Corinna and Klinger, Roman and Friedrich, Christoph M and Hofmann-Apitius, Martin and Fluck, Juliane},
  title     = {Chemical Names: {T}erminological Resources and Corpora Annotation},
  booktitle = {LREC Workshop on Building and Evaluating Resources for Biomedical Text Mining},
  year      = {2008},
}
```
zeroshot/arxiv-biology
--- annotations_creators: - no-annotation language_creators: - expert-generated language: - en license: - cc0-1.0 multilinguality: - monolingual --- ![;)](https://media.giphy.com/media/xd9HUXswWPY1EEJ80a/giphy.gif) ### Dataset Curators The original data is maintained by [ArXiv](https://arxiv.org/) ### Licensing Information The data is under the [Creative Commons CC0 1.0 Universal Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/) ### Citation Information ``` @misc{clement2019arxiv, title={On the Use of ArXiv as a Dataset}, author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi}, year={2019}, eprint={1905.00075}, archivePrefix={arXiv}, primaryClass={cs.IR} } ```
albertvillanova/visual-spatial-reasoning
--- annotations_creators: - crowdsourced language: - en language_creators: - machine-generated license: - apache-2.0 multilinguality: - monolingual pretty_name: Visual Spatial Reasoning size_categories: - 10K<n<100K source_datasets: - original tags: [] task_categories: - image-classification task_ids: [] --- # Dataset Card for Visual Spatial Reasoning ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://ltl.mmll.cam.ac.uk/ - **Repository:** https://github.com/cambridgeltl/visual-spatial-reasoning - **Paper:** https://arxiv.org/abs/2205.00363 - **Leaderboard:** https://paperswithcode.com/sota/visual-reasoning-on-vsr - **Point of Contact:** https://ltl.mmll.cam.ac.uk/ ### Dataset Summary The Visual Spatial Reasoning (VSR) corpus is a collection of caption-image pairs with true/false labels. Each caption describes the spatial relation of two individual objects in the image, and a vision-language model (VLM) needs to judge whether the caption is correctly describing the image (True) or not (False). ### Supported Tasks and Leaderboards We test three baselines, all supported in huggingface. They are VisualBERT [(Li et al. 2019)](https://arxiv.org/abs/1908.03557), LXMERT [(Tan and Bansal, 2019)](https://arxiv.org/abs/1908.07490) and ViLT [(Kim et al. 2021)](https://arxiv.org/abs/2102.03334). The leaderboard can be checked at [Papers With Code](https://paperswithcode.com/sota/visual-reasoning-on-vsr). model | random split | zero-shot :-------------|:-------------:|:-------------: *human* | *95.4* | *95.4* VisualBERT | 57.4 | 54.0 LXMERT | **72.5** | **63.2** ViLT | 71.0 | 62.4 ### Languages The language in the dataset is English as spoken by the annotators. The BCP-47 code for English is en. [`meta_data.csv`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data/data_files/meta_data.jsonl) contains meta data of annotators. ## Dataset Structure ### Data Instances Each line is an individual data point. 
Each `jsonl` file is of the following format:

```json
{"image": "000000050403.jpg", "image_link": "http://images.cocodataset.org/train2017/000000050403.jpg", "caption": "The teddy bear is in front of the person.", "label": 1, "relation": "in front of", "annotator_id": 31, "vote_true_validator_id": [2, 6], "vote_false_validator_id": []}
{"image": "000000401552.jpg", "image_link": "http://images.cocodataset.org/train2017/000000401552.jpg", "caption": "The umbrella is far away from the motorcycle.", "label": 0, "relation": "far away from", "annotator_id": 2, "vote_true_validator_id": [], "vote_false_validator_id": [2, 9, 1]}
```

### Data Fields

`image` denotes the name of the image in COCO and `image_link` points to the image on the COCO server (so you can also access it directly). `caption` is self-explanatory. `label` being `0` or `1` corresponds to False and True respectively. `relation` records the spatial relation used. `annotator_id` points to the annotator who originally wrote the caption. `vote_true_validator_id` and `vote_false_validator_id` are the annotators who voted True or False in the second-phase validation.

### Data Splits

The VSR corpus, after validation, contains 10,119 data points with high agreement. On top of these, we create two splits: (1) a random split and (2) a zero-shot split. For the random split, we randomly split all data points into train, development, and test sets. The zero-shot split makes sure that train, development and test sets have no overlap of concepts (i.e., if *dog* is in the test set, it is not used for training and development). Below are some basic statistics of the two splits.

split | train | dev | test | total
:------|:--------:|:--------:|:--------:|:--------:
random | 7,083 | 1,012 | 2,024 | 10,119
zero-shot | 5,440 | 259 | 731 | 6,430

Check out [`data/`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data) for more details.

## Dataset Creation

### Curation Rationale

Understanding spatial relations is fundamental to achieving intelligence. Existing vision-language reasoning datasets are great, but they compose multiple types of challenges and can thus conflate different sources of error. The VSR corpus focuses specifically on spatial relations so we can have accurate diagnosis and maximum interpretability.

### Source Data

#### Initial Data Collection and Normalization

**Image pair sampling.** MS COCO 2017 contains 123,287 images and has labelled the segmentation and classes of 886,284 instances (individual objects). Leveraging the segmentation, we first randomly select two concepts, then retrieve all images containing the two concepts in COCO 2017 (train and validation sets). Images that contain multiple instances of either concept are then filtered out to avoid referencing ambiguity. For the single-instance images, we also filter out images with instance area size < 30,000 to prevent extremely small instances. After these filtering steps, we randomly sample a pair from the remaining images. We repeat this process to obtain a large number of individual image pairs for caption generation.

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

**Fill in the blank: template-based caption generation.** Given a pair of images, the annotator needs to come up with a valid caption that correctly describes one image but is incorrect for the other.
In this way, the annotator can focus on the key difference between the two images (which should be the spatial relation of the two objects of interest) and come up with a challenging relation that differentiates the two. Similar paradigms are also used in the annotation of previous vision-language reasoning datasets such as NLVR2 (Suhr et al., 2017, 2019) and MaRVL (Liu et al., 2021). To discourage annotators from writing modifiers and differentiating the image pair with things beyond accurate spatial relations, we opt for a template-based classification task instead of free-form caption writing. Besides, the template-generated dataset can be easily categorised based on relations and their meta-categories.

The caption template has the format of “The `OBJ1` (is) __ the `OBJ2`.”, and the annotators are instructed to select a relation from a fixed set to fill in the slot. The copula “is” can be omitted for grammaticality. For example, for “contains”, “consists of”, and “has as a part”, “is” should be discarded in the template when extracting the final caption. The fixed set of spatial relations enables us to retain full control of the generation process. The full list of relations used is given in the table below. It contains 71 spatial relations and is adapted from the summarised relation table of Fagundes et al. (2021). We made minor changes to filter out clearly unusable relations, make relation names grammatical under our template, and reduce repeated relations. In our final dataset, 65 out of the 71 available relations are actually included (the other 6 are either not selected by annotators or are selected but the captions did not pass the validation phase).

| Category    | Spatial Relations                                                                                                                                 |
|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------|
| Adjacency   | Adjacent to, alongside, at the side of, at the right side of, at the left side of, attached to, at the back of, ahead of, against, at the edge of |
| Directional | Off, past, toward, down, deep down*, up*, away from, along, around, from*, into, to*, across, across from, through*, down from                    |
| Orientation | Facing, facing away from, parallel to, perpendicular to                                                                                           |
| Projective  | On top of, beneath, beside, behind, left of, right of, under, in front of, below, above, over, in the middle of                                   |
| Proximity   | By, close to, near, far from, far away from                                                                                                       |
| Topological | Connected to, detached from, has as a part, part of, contains, within, at, on, in, with, surrounding, among, consists of, out of, between, inside, outside, touching |
| Unallocated | Beyond, next to, opposite to, after*, among, enclosed by                                                                                          |

**Second-round Human Validation.** Every annotated data point is reviewed by at least two additional human annotators (validators). In validation, given a data point (consisting of an image and a caption), the validator gives either a True or False label. We exclude data points that have < 2/3 of validators agreeing with the original label.

In the guideline, we communicated to the validators that, for relations such as “left”/“right” and “in front of”/“behind”, they should tolerate different reference frames: i.e., if the caption is true from either the object’s or the viewer’s reference frame, it should be given a True label. Only when the caption is incorrect under all reference frames is a False label assigned.
#### Who are the annotators?

Annotators are hired from [prolific.co](https://prolific.co). We require that they (1) hold at least a bachelor's degree, (2) are fluent in English or are native speakers, and (3) have a >99% historical approval rate on the platform. All annotators are paid an hourly rate of 12 GBP. Prolific takes an extra 33% service charge and 20% VAT on the service charge.

For caption generation, we release the task in batches of 200 instances, and an annotator is required to finish a batch in 80 minutes. An annotator cannot take more than one batch per day. In this way we maintain a diverse set of annotators and also prevent annotators from becoming fatigued. For the second-round validation, we group 500 data points into one batch, and an annotator is asked to label each batch in 90 minutes.

In total, 24 annotators participated in caption generation and 26 in validation. The annotators have diverse demographic backgrounds: they were born in 13 different countries, live in 13 different countries, and have 14 different nationalities. 57.4% of the annotators identify as female and 42.6% as male.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

This project is licensed under the [Apache-2.0 License](https://github.com/cambridgeltl/visual-spatial-reasoning/blob/master/LICENSE).

### Citation Information

```bibtex
@article{Liu2022VisualSR,
  title={Visual Spatial Reasoning},
  author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.00363}
}
```

### Contributions

Thanks to [@juletx](https://github.com/juletx) for adding this dataset.
3ee/regularization-architecture
---
license: mit
tags:
- stable-diffusion
- regularization-images
- text-to-image
- image-to-image
- dreambooth
- class-instance
- preservation-loss-training
- architecture
---

# Architecture Regularization Images

A collection of regularization / class-instance image datasets of architecture for Stable Diffusion 1.5, intended for use in DreamBooth training with prior-preservation loss.
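For context, here is a conceptual sketch of how class/regularization images like these enter DreamBooth's prior-preservation objective. The `denoising_loss` method and the batch objects are hypothetical stand-ins, not the API of any specific training script.

```python
def dreambooth_step(model, instance_batch, class_batch, prior_loss_weight=1.0):
    # Loss on the subject's own instance images.
    instance_loss = model.denoising_loss(instance_batch)  # hypothetical API
    # Loss on class/regularization images (e.g., generic architecture),
    # which preserves the model's prior over the broad class.
    prior_loss = model.denoising_loss(class_batch)        # hypothetical API
    return instance_loss + prior_loss_weight * prior_loss
```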
Cohere/miracl-ja-queries-22-12
---
annotations_creators:
- expert-generated
language:
- ja
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---

# MIRACL (ja) embedded with cohere.ai `multilingual-22-12` encoder

We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.

The query embeddings can be found in [Cohere/miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12).

For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).

Dataset info:

> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.

## Embeddings

We computed embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).

## Loading the dataset

In [miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the files can be quite large.

You can either load the dataset like this:
```python
from datasets import load_dataset

docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train")
```

Or you can stream it without downloading it first:
```python
from datasets import load_dataset

docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train", streaming=True)

for doc in docs:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']
```

## Search

Have a look at [miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12), where we provide the query embeddings for the MIRACL dataset.

To search in the documents, you must use **dot-product**. Then compare the query embedding with the document embeddings, either using a vector database (recommended) or by computing the dot product directly.

A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets.
# For large datasets, use a vector DB instead.
from datasets import load_dataset
import torch

# Load documents + embeddings
docs = load_dataset("Cohere/miracl-ja-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])

# Load queries
queries = load_dataset("Cohere/miracl-ja-queries-22-12", split="dev")

# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape: (1, dim)

# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)

# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'])
```

You can get embeddings for new queries using our API:

```python
# Run: pip install cohere
import cohere

co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0]  # embedding for the first text
```

## Performance

In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.

We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find that hit@3 is easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results.

Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |

Further languages (not supported by Elasticsearch):

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
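For reference, both measures can be computed per query under binary relevance as follows. This is an illustrative sketch, not the exact evaluation script behind the numbers above.

```python
import math

def hit_at_k(ranked_doc_ids, relevant_ids, k=3):
    """1.0 if any of the top-k retrieved documents is relevant, else 0.0."""
    return float(any(d in relevant_ids for d in ranked_doc_ids[:k]))

def ndcg_at_k(ranked_doc_ids, relevant_ids, k=10):
    """nDCG@k with binary relevance; ranks are 0-based inside the sums."""
    dcg = sum(
        1.0 / math.log2(i + 2)
        for i, d in enumerate(ranked_doc_ids[:k])
        if d in relevant_ids
    )
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant_ids), k)))
    return dcg / ideal if ideal > 0 else 0.0
```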
bigcode/jupyter-parsed
--- dataset_info: features: - name: hexsha dtype: string - name: size dtype: int64 - name: ext dtype: string - name: lang dtype: string - name: max_stars_repo_path dtype: string - name: max_stars_repo_name dtype: string - name: max_stars_repo_head_hexsha dtype: string - name: max_stars_repo_licenses sequence: string - name: max_stars_count dtype: int64 - name: max_stars_repo_stars_event_min_datetime dtype: string - name: max_stars_repo_stars_event_max_datetime dtype: string - name: max_issues_repo_path dtype: string - name: max_issues_repo_name dtype: string - name: max_issues_repo_head_hexsha dtype: string - name: max_issues_repo_licenses sequence: string - name: max_issues_count dtype: int64 - name: max_issues_repo_issues_event_min_datetime dtype: string - name: max_issues_repo_issues_event_max_datetime dtype: string - name: max_forks_repo_path dtype: string - name: max_forks_repo_name dtype: string - name: max_forks_repo_head_hexsha dtype: string - name: max_forks_repo_licenses sequence: string - name: max_forks_count dtype: int64 - name: max_forks_repo_forks_event_min_datetime dtype: string - name: max_forks_repo_forks_event_max_datetime dtype: string - name: avg_line_length dtype: float64 - name: max_line_length dtype: int64 - name: alphanum_fraction dtype: float64 - name: cells sequence: sequence: sequence: string - name: cell_types sequence: string - name: cell_type_groups sequence: sequence: string splits: - name: train num_bytes: 22910808665 num_examples: 1459454 download_size: 9418947545 dataset_size: 22910808665 --- # Dataset Card for "jupyter-parsed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
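Assuming you have access to the dataset, here is a minimal loading sketch based on the feature schema above; streaming avoids downloading the full ~9.4 GB archive up front.

```python
from datasets import load_dataset

# Stream the train split so nothing is downloaded up front.
ds = load_dataset("bigcode/jupyter-parsed", split="train", streaming=True)

example = next(iter(ds))
# `cells` holds the parsed notebook cells; `cell_types` labels each one.
for cell, cell_type in zip(example["cells"], example["cell_types"]):
    print(cell_type, cell)
```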
jerteh/SrpELTeC
---
license: cc-by-4.0
language:
- sr
field: sents
pretty_name: Serbian Literary Text Collection
size_categories:
- 1M<n<10M
task_categories:
- text-generation
tags:
- literary
---

SrpELTeC is a corpus of old Serbian novels first published in the period 1840–1920, digitized within COST Action CO16204: Distant Reading for European Literary History (2018–2022). The corpus includes 120 novels with 5,263,071 words, 22,700 pages, 2,557 chapters, 158,317 passages, 567 songs, 2,972 verses, 803 segments in foreign languages, and 949 mentioned works.

The dataset consists of two text files that can be loaded via:

```python
from datasets import load_dataset

dataset = load_dataset("jerteh/SrpELTeC")
```

Preview:

```python
print(dataset["train"][100])
{'text': '„Mi besmo dobre gazde u selu, otac moj imao je kuću i sesiju zemlje, imali smo četiri vola, lepa četiri krstaša, rogovi u njih behu kao u jelena, ali još više u šire nego u jelena, a vrhovi behu bačeni natrag, dlaku na glavi imaše belu i kovrčavu, koja je skoro do očiju popadala bila na glavi, — pa posle sva ova četiri vola behu tako jednaka i u hodu i u veličini i debljini, da je to divota bilo pogledati — e ta divna ti je to stoka bila.'}
```

Citation:

```bibtex
@inproceedings{stankovic2022distant,
  title={Distant Reading in Digital Humanities: Case Study on the Serbian Part of the ELTeC Collection},
  author={Stankovi{\'c}, Ranka and Krstev, Cvetana and {\v{S}}andrih Todorovi{\'c}, Branislava and Vitas, Du{\v{s}}ko and {\v{S}}kori{\'c}, Mihailo and Ikoni{\'c} Ne{\v{s}}i{\'c}, Milica},
  booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference},
  pages={3337--3345},
  year={2022}
}
```
neuclir/csl
---
annotations_creators:
- no-annotation
language:
- zh
- en
license:
- apache-2.0
pretty_name: CSL
size_categories:
- 100K<n<1M
source_datasets:
- extended|csl
tags: []
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---

# Dataset Card for CSL

## Dataset Description

CSL is the Chinese Scientific Literature Dataset.

- **Paper:** https://aclanthology.org/2022.coling-1.344
- **Repository:** https://github.com/ydli-ai/CSL

### Dataset Summary

The dataset contains titles, abstracts, and keywords of papers written in Chinese from several academic fields.

### Languages

- Chinese
- English (translation)

## Dataset Structure

### Data Instances

| Split | Documents |
|-----------------|----------:|
| `csl` | 396k |
| `en_translation`| 396k |

### Data Fields

- `doc_id`: unique identifier for this document
- `title`: title of the paper
- `abstract`: abstract of the paper
- `keywords`: keywords associated with the paper
- `category`: the broad category of the paper
- `category_eng`: English translation of the broad category (e.g., Engineering)
- `discipline`: academic discipline of the paper
- `discipline_eng`: English translation of the academic discipline (e.g., Agricultural Engineering)

The `en_translation` split contains documents translated with the Google Translate service. All text is in English, so the fields `category_eng` and `discipline_eng` are omitted.

## Dataset Usage

Using 🤗 Datasets:

```python
from datasets import load_dataset

dataset = load_dataset('neuclir/csl')['csl']
```

## License & Citation

This dataset is based on the [Chinese Scientific Literature Dataset](https://github.com/ydli-ai/CSL), released under Apache 2.0. The primary changes are the addition of `doc_id`s, English translations of the category and discipline descriptions by a native speaker, and basic de-duplication. The code that performed these modifications is available in [this repository](https://github.com/NeuCLIR/csl-preprocess).

If you use this data, please cite:

```
@inproceedings{li-etal-2022-csl,
    title = "{CSL}: A Large-scale {C}hinese Scientific Literature Dataset",
    author = "Li, Yudong and Zhang, Yuqing and Zhao, Zhe and Shen, Linlin and Liu, Weijie and Mao, Weiquan and Zhang, Hui",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.344",
    pages = "3917--3923",
}
```
caijanfeng/hdflowmind
---
license: openrail
---

We propose the Flowmind2digital method and the hdFlowmind dataset in our paper. hdFlowmind contains 1,776 hand-drawn and manually annotated flowminds, covering a broader range of 22 scenarios and offering a larger quantity of data than previous works.