amphora/lmsys-finance | ---
dataset_info:
features:
- name: conversation_id
dtype: string
- name: model
dtype: string
- name: conversation
dtype: string
- name: turn
dtype: int64
- name: language
dtype: string
- name: openai_moderation
dtype: string
- name: redacted
dtype: bool
- name: count
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 10328855
num_examples: 735
download_size: 3912614
dataset_size: 10328855
task_categories:
- conversational
language:
- en
tags:
- finance
size_categories:
- n<1K
---
# Dataset Card for "lmsys-finance"
This dataset is a curated version of the [lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset,
focusing solely on finance-related conversations. The refinement process encompassed:
1. Removing non-English conversations.
2. Selecting conversations from models: "vicuna-33b", "wizardlm-13b", "gpt-4", "gpt-3.5-turbo", "claude-2", "palm-2", and "claude-instant-1".
3. Excluding conversations with responses under 30 characters.
4. Filtering against a list of 100 financial keywords, keeping conversations with at least 10 keyword matches. |
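A minimal sketch of such a filter (the keyword list, field names, and per-conversation counting rule below are illustrative assumptions, not the author's exact code):

```python
# Sketch of the four filtering steps described above. The keyword list,
# field names, and counting rule are illustrative assumptions.
KEEP_MODELS = {"vicuna-33b", "wizardlm-13b", "gpt-4", "gpt-3.5-turbo",
               "claude-2", "palm-2", "claude-instant-1"}
FINANCE_KEYWORDS = {"stock", "bond", "dividend", "portfolio", "hedge"}  # ~100 in practice

def keep(example):
    """Return True if a conversation passes all four filters."""
    if example["language"] != "English":
        return False                      # 1. English only
    if example["model"] not in KEEP_MODELS:
        return False                      # 2. allow-listed models only
    text = example["conversation"].lower()
    if len(text) < 30:                    # 3. stand-in for the 30-character response rule
        return False
    hits = sum(text.count(k) for k in FINANCE_KEYWORDS)
    return hits >= 10                     # 4. at least 10 keyword matches
```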
wenge-research/yayi2_pretrain_data | ---
license: apache-2.0
language:
- zh
- en
size_categories:
- 100B<n<1T
---
## 介绍/Introduction
本数据集源自雅意训练语料,我们精选了约100B数据,数据大小约为500GB。我们期望通过雅意预训练数据的开源推动中文预训练大模型开源社区的发展,并积极为此贡献力量。通过开源,我们与每一位合作伙伴共同构建雅意大模型生态。
We opensource the pre-trained dataset in this release, it should contain more than 100B tokens depending on the tokenizer you use, requiring more than 500GB of local storage. By open-sourcing the pre-trained dataset, we aim to contribute to the development of the Chinese pre-trained large language model open-source community. Through open-source, we aspire to collaborate with every partner in building the YAYI large language model ecosystem.
## 组成/Composition
* 在预训练阶段,我们不仅使用了互联网数据来训练模型的语言能力,还添加了通用精选数据和领域数据,以增强模型的专业技能。通用精选数据包含人工收集和整理的高质量数据。涵盖了报纸类数据、文献类数据、APP类数据、代码类数据、书籍类数据、百科类数据。其中,报纸类数据包括广泛的新闻报道和专栏文章,这类数据通常结构化程度高,信息量丰富。文献类数据包括学术论文和研究报告,为我们的数据集注入了专业和深度。代码类数据包括各种编程语言的源码,有助于构建和优化技术类数据的处理模型。书籍类数据涵盖了小说、诗歌、古文、教材等内容,提供丰富的语境和词汇,增强语言模型的理解能力。数据分布情况如下:
* During the pre-training phase, we not only utilized internet data to train the model's language abilities but also incorporated curated general data and domain-specific information to enhance the model's expertise. Curated general data covers a wide range of categories including books (e.g., textbooks, novels), codes, encyclopedias, forums, academic papers, authoritative news, laws and regulations. Details of the data distribution are as follows:
![data distribution](https://huggingface.co/datasets/wenge-research/yayi2_pretrain_data/resolve/main/assets/data_distribution.jpg)
## 数据清洗/Data Cleaning
- 我们构建了一套全方位提升数据质量的数据处理流水线,包括标准化、启发式清洗、多级去重、毒性过滤四个模块。我们共收集了 240TB 原始数据,预处理后仅剩 10.6TB 高质量数据。整体流程如下:
- We establish a comprehensive data processing pipeline to enhance data quality in all aspects. This pipeline comprises four modules: normalizing, heuristic cleaning, multi-level deduplication, and toxicity filtering. 240 terabytes of raw data are collected for pre-training, and only 10.6 terabytes of high-quality data remain after preprocessing. Details of the data processing pipeline are as follows:
![data process](https://huggingface.co/datasets/wenge-research/yayi2_pretrain_data/resolve/main/assets/data_process.png)
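The multi-level deduplication module, for instance, can be sketched with exact hashing at the document and paragraph levels (a simplified illustration; the real pipeline also includes fuzzy deduplication and the other three modules):

```python
import hashlib

def normalize(text: str) -> str:
    # Standardization step: collapse whitespace and lowercase before hashing.
    return " ".join(text.lower().split())

def dedup(docs):
    """Exact document- and paragraph-level deduplication (illustrative)."""
    seen_docs, seen_paras, out = set(), set(), []
    for doc in docs:
        h = hashlib.sha256(normalize(doc).encode()).hexdigest()
        if h in seen_docs:
            continue  # drop exact duplicate documents
        seen_docs.add(h)
        kept = []
        for para in doc.split("\n"):
            ph = hashlib.sha256(normalize(para).encode()).hexdigest()
            if ph in seen_paras:
                continue  # drop paragraphs already seen elsewhere
            seen_paras.add(ph)
            kept.append(para)
        if kept:
            out.append("\n".join(kept))
    return out
```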
## 协议/License
本项目中的代码依照 [Apache-2.0](https://github.com/wenge-research/YAYI2/blob/main/LICENSE) 协议开源,社区使用 YAYI 2 模型和数据需要遵循[雅意YAYI 2 模型社区许可协议](https://github.com/wenge-research/YAYI2/blob/main/COMMUNITY_LICENSE)。若您需要将雅意 YAYI 2系列模型或其衍生品用作商业用途,请根据[《雅意 YAYI 2 模型商用许可协议》](https://github.com/wenge-research/YAYI2/blob/main/COMMERCIAL_LICENSE)将商用许可申请登记信息发送至指定邮箱 [yayi@wenge.com](mailto:yayi@wenge.com)。审核通过后,雅意将授予您商用版权许可,请遵循协议中的商业许可限制。
The code in this project is open-sourced under the [Apache-2.0](https://github.com/wenge-research/YAYI2/blob/main/LICENSE) license. The use of YaYi series model weights and data must adhere to the [YAYI 2 Community License](https://github.com/wenge-research/YAYI2/blob/main/COMMUNITY_LICENSE). If you intend to use the YAYI 2 series models or their derivatives for commercial purposes, please submit your commercial license application and registration information to [yayi@wenge.com](mailto:yayi@wenge.com), following the [YAYI 2 Commercial License](https://github.com/wenge-research/YAYI2/blob/main/COMMERCIAL_LICENSE). Upon approval, YAYI will grant you a commercial copyright license, subject to the commercial license restrictions outlined in the agreement.
## 引用/Citation
如果您在工作中使用了我们的模型或者数据,请引用我们的论文。
If you are using the resource for your work, please cite our paper.
```
@article{luo2023yayi,
  author = {Yin Luo and Qingchao Kong and Nan Xu and others},
title = {YAYI 2: Multilingual Open Source Large Language Models},
journal = {arXiv preprint arXiv:2312.14862},
url = {https://arxiv.org/abs/2312.14862},
year = {2023}
}
``` |
PipableAI/pip-txt-to-sql-spider-bird-dataset | ---
dataset_info:
features:
- name: schema
dtype: string
- name: question
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 24825271
num_examples: 16428
download_size: 1320142
dataset_size: 24825271
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "spider-bird"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BoyaWu10/Bunny-v1_0-data | ---
license: apache-2.0
task_categories:
- visual-question-answering
- question-answering
language:
- en
pretty_name: Bunny-v1.0 Dataset
size_categories:
- 1M<n<10M
---
# Bunny-v1.0 Dataset Card
📖 [Technical report](https://arxiv.org/abs/2402.11530) | 🤗 [Bunny-v1.0-3B](https://huggingface.co/BAAI/Bunny-v1_0-3B) | 🏠 [Code](https://github.com/BAAI-DCAI/Bunny) | 🐰 [Demo](https://wisemodel.cn/space/baai/Bunny)
Bunny is a family of lightweight multimodal models.
Bunny-v1.0-data is the training dataset for Bunny-v1.0 series, including [Bunny-v1.0-3B](https://huggingface.co/BAAI/Bunny-v1_0-3B).
## Pretrain
We use a high-quality coreset of LAION-2B with fewer duplicates and more informative samples, built by [this work](https://github.com/BAAI-DCAI/Dataset-Pruning/tree/main/LAION).
We randomly sample 2 million image-text pairs from the coreset and convert them to training format.
The pretraining data and images can be found in the `pretrain` folder.
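The sampling-and-conversion step might look like the sketch below (the chat-record layout is an assumed LLaVA-style illustration, not Bunny's documented format):

```python
import random

def sample_pairs(pairs, k=2_000_000, seed=0):
    """Randomly sample k (image, caption) pairs and convert each to a
    chat-style training record. The record layout is an assumed
    illustration, not Bunny's documented format."""
    rng = random.Random(seed)
    chosen = rng.sample(pairs, min(k, len(pairs)))
    return [
        {
            "image": image,
            "conversations": [
                {"from": "human", "value": "<image>\nDescribe the image."},
                {"from": "gpt", "value": caption},
            ],
        }
        for image, caption in chosen
    ]
```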
## Finetune
We build Bunny-695K by modifying [SVIT-mix-665K](https://arxiv.org/abs/2307.04087) for finetuning.
The finetuning data can be found in the `finetune` folder.
## Usage
The images are packed into multiple packages.
After downloading the images, run the following script to merge them into one:
```shell
cat images.tar.gz.part-* > images.tar.gz
```
Then unpack the package with the following command:
```shell
tar -xvzf images.tar.gz
```
## License
The content of this project itself is licensed under the Apache license 2.0. |
Doctor-Shotgun/theory-of-mind-dpo | ---
language:
- en
---
This is [grimulkan/theory-of-mind](https://huggingface.co/datasets/grimulkan/theory-of-mind) with "rejected" responses generated using [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), and the file formatted for use in DPO training.
The code used to generate the dataset can be found in this repository: https://github.com/DocShotgun/LLM-datagen |
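DPO trainers typically expect (prompt, chosen, rejected) triples; a minimal sketch of reading such a file (the JSONL layout and field names are assumptions — inspect the repo's files for the exact schema):

```python
import json

def to_dpo_triples(path):
    """Yield (prompt, chosen, rejected) triples from a JSONL file.

    The field names below are assumed for illustration; check the
    dataset files for the exact schema used in this repo.
    """
    with open(path) as f:
        for line in f:
            row = json.loads(line)
            yield row["prompt"], row["chosen"], row["rejected"]
```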
somosnlp/es-inclusive-language | ---
language:
- es
size_categories:
- 1K<n<10K
task_categories:
- text2text-generation
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: gender_exclusive
dtype: string
- name: gender_inclusive
dtype: string
- name: difficulty
dtype: string
- name: origin
dtype: string
splits:
- name: train
num_bytes: 630817
num_examples: 3212
- name: validation
num_bytes: 139222
num_examples: 721
- name: test
num_bytes: 50611
num_examples: 263
download_size: 397549
dataset_size: 820650
license: cc-by-nc-sa-4.0
---
# Dataset card for es-inclusive-language
Languages are powerful tools to communicate ideas, but their use is not impartial. The selection of words carries inherent biases and reflects subjective perspectives. In some cases, language is wielded to enforce ideologies, marginalize certain groups, or promote specific political agendas.
Spanish is no exception. For instance, when we say “los alumnos” or “los ingenieros”, we are excluding women from those groups. Similarly, expressions such as “los gitanos” or “los musulmanes” perpetuate discrimination against these communities.
In response to these linguistic challenges, this dataset offers neutral alternatives in accordance with official guidelines on inclusive language from various Spanish speaking countries. Its purpose is to provide grammatically correct and inclusive solutions to situations where our language choices might otherwise be exclusive.
## Dataset Structure
This dataset consists of pairs of texts with one entry featuring exclusive language and the other one its corresponding inclusive rewrite. All pairs are tagged with the origin (source) of the data and, in order to account for completeness of inclusive translation, also with labels for translation difficulty.
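As a sketch, the pair-plus-tags structure can be consumed like this (the two rows are illustrative examples following the card's schema, not actual dataset entries, and the `origin` values are guesses):

```python
def filter_by_difficulty(rows, tag):
    """Select (exclusive, inclusive) text pairs carrying a difficulty tag."""
    return [(r["gender_exclusive"], r["gender_inclusive"])
            for r in rows if r["difficulty"] == tag]

# Illustrative rows following the card's schema; `origin` values are guesses.
rows = [
    {"gender_exclusive": "Los alumnos", "gender_inclusive": "El alumnado",
     "difficulty": "plural_neutro", "origin": "human"},
    {"gender_exclusive": "La médico", "gender_inclusive": "La médica",
     "difficulty": "feminizar_profesiones", "origin": "human"},
]
```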
### Difficulty tag descriptions
We used different labels, most of them gender-related, which can be described as follows:
| Tag | Description | Example |
|-----------------------|---------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| no_cambia | No changes are needed | "Los alumnos Carlos y Manuel son muy problemáticos" cannot be translated as "El alumnado Carlos y Manuel son muy problemáticos” |
| plural_complejo | Plural words for which there is not a neutral term. There are different formulas that will vary according to the context. | "Los agricultores" -> "La comunidad agrícola", "Los y las agricultoras". “Las limpiadoras” -> “El equipo de limpieza”. More: "El grupo de...", "El sector de...", "El personal de..." |
| plural_neutro | Change the plural for a generic noun. | "Los alumnos" -> "El alumnado" |
| culturas | People and cultures | "Los andaluces" -> "El pueblo andaluz", "La comunidad andaluza" |
| feminizar_profesiones | Professions with androcentric feminine forms | “La médico” -> "La médica". “La técnico de sonido” -> "La técnica de sonido" |
| nombres_propios | Proper names | "Los alumnos Carlos y Manuel son muy problemáticos" cannot be translated as "El alumnado es muy problemático" |
| persona_generica | Reference to a generic person | "Nota al lector" -> "Nota a quien lee", "Nota a la persona que lee" |
| dificultades_variadas | Mix of difficulties (to tag big chunks of diverse data) | |
| plurales | Mix of neutral and complex plurals | |
| falsa_concordancia | Androcentric agreement errors | "Estas siete parejas van a dar lo mejor de sí mismos" -> "Estas siete parejas van a dar lo mejor de sí mismas." |
| omision | The subject or some pronouns are omitted, or the phrase is restructured with verboids. | "los participantes mantendrán un debate" -> "habrá un debate", "Si los científicos trabajan adecuadamente" -> "Trabajando adecuadamente", "los estudiantes" -> "estudiantes" |
| terminologia | Correction of terms with ableist, racist, or other types of discrimination bias. | |
| parafrasis | Avoid words with generic connotations by reformulating the phrase | |
| otros | Difficulties that don’t fit in the other labels | |
### Origin tags descriptions
Data quality can depend on its origin, so each entry is tagged with an origin label according to this table:
| Tag | Description | Link to origin |
|---------------------------|----------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| neutral_es | Curated and refined version of neutral-es dataset | https://huggingface.co/datasets/hackathon-pln-es/neutral-es |
| GPT-3.5_fewshot | Chat GPT-3.5 generated with few shot technique | |
| GPT-3.5_CaDi <sup>*</sup> | Data created based on the dataset used for developing CaDi project<sup>*</sup> | https://lenguaje-incluyente.ibero.mx/ |
| GPT-3.5_fs_multiplication | Data multiplied from GPT-3.5_fewshot using GPT-3.5 | |
| guia_CCGG | Examples from Spanish General Courts language inclusive Guide | https://www.congreso.es/docu/igualdad/Recomendaciones_uso_no_sexista_lenguajeCC.GG..pdf |
| guia_TAI | Examples from Trenes Argentinos' Guide to the use of inclusive language | https://www.argentina.gob.ar/sites/default/files/guia_para_uso_de_lenguaje_inclusivo_v1.pdf |
| guia_CONICET | Examples from Guide to inclusive, non-sexist language (CONICET) | https://cenpat.conicet.gov.ar/wp-content/uploads/sites/91/2020/08/Guia-lenguaje-inclusivo-no-sexista-CENPAT_final-1.pdf |
| guia_INAES | Examples of Guidelines for Inclusive Language Recommendations (INAES) | https://www.argentina.gob.ar/sites/default/files/2020/10/lenguaje_inclusivo_inaes_2021.pdf |
| guia_CHRYSALLIS | Examples from Practical Guide to Inclusive Language (Chrysallis) | https://www.lgbtqiahealtheducation.org/wp-content/uploads/2020/04/Guia-practica-de-lenguaje-inclusivo-Chrysallis.pdf |
| guia_ONU | Examples from Guidance for the use of gender-inclusive language (UN) | https://www.unwomen.org/sites/default/files/Headquarters/Attachments/Sections/Library/Gender-inclusive%20language/Guidelines-on-gender-inclusive-language-es.pdf |
| guia_MX | Examples from Manual for the use of inclusive and gender-sensitive language (MX) | https://www.gob.mx/cms/uploads/attachment/file/183695/Manual_Lenguaje_Incluyente_con_perspectiva_de_g_nero-octubre-2016.pdf |
| guia_CL | Examples from Gender Inclusive Language Guide of the Government of Chile | https://www.cultura.gob.cl/wp-content/uploads/2023/01/guia-de-lenguaje-inclusivo-de-genero.pdf |
| guia_IEM | Examples from Uso del Lenguaje Inclusivo de Género | https://secretariagenero.poder-judicial.go.cr/images/Documentos/LenguajeInclusivo/Documentos/Uso-de-lenguaje-inclusivo-de-Genero-IEM-UNA.pdf |
| human_combinatory | Combinatorics of text fragments generated with GPT3.5 | |
| GPT-4_human | Chat GPT-4 generated and human revised | |
| human | Human created | |
<sup>*</sup>©Universidad Iberoamericana, A.C. , Ciudad de México, México
<sup>*</sup>©Capitolina Díaz Martínez, Elvia María Guadalupe González del Pliego Dorantes, Marco Antonio López Hernández, Alberto López Medina, Héctor Celallos Avalos, Laura Mejía Hernández
## Data collection process
The data used for training the model has been sourced from various origins. The first and most important source was a curated and refined version of [es_neutral](https://huggingface.co/datasets/hackathon-pln-es/neutral-es).
In addition, we manually generated data based on Official Guidelines from different Spanish speaking countries. Finally, we augmented this data by experimenting with various prompts and Few-Shot learning techniques. We needed to be as explicit as possible, otherwise we wouldn’t get good results. For example:
![foto1.JPG](https://cdn-uploads.huggingface.co/production/uploads/65d9bf5b41325e422e9fa704/48ipmlxyEHgkNLxLvWnUp.jpeg)
![foto2.JPG](https://cdn-uploads.huggingface.co/production/uploads/65d9bf5b41325e422e9fa704/rwkDR3FrFyLLOMmofCMFI.jpeg)
![foto3.JPG](https://cdn-uploads.huggingface.co/production/uploads/65d9bf5b41325e422e9fa704/rHCV4UwitTbmQD0r2WS6V.jpeg)
We tried to be as inclusive as possible, paying close attention to the classification of difficulties that one could encounter in texts like these.
Moreover, we took care to incorporate numerous counterexamples, recognizing that there are instances where neutrality is not required in a sentence. For instance, “Las arquitectas María Nuñez y Rosa Loria presentaron el proyecto” should not be rewritten as “El equipo de arquitectura María Nuñez y Rosa Loria presentó el proyecto”.
It’s important to highlight that the Traductor Inclusivo not only promotes gender inclusivity but also addresses other forms of discrimination such as ableism, racism, xenophobia, and more.
### Sources
- [Recomendaciones para un uso no sexista del lenguaje en la Administración parlamentaria (España)](https://www.congreso.es/docu/igualdad/Recomendaciones_uso_no_sexista_lenguajeCC.GG..pdf)
- [Guía para uso de lenguaje inclusivo (Argentina)](https://www.argentina.gob.ar/sites/default/files/guia_para_uso_de_lenguaje_inclusivo_v1.pdf)
- [Guía de lenguaje inclusivo no sexista CCT CONICET-CENPAT (Argentina)](https://cenpat.conicet.gov.ar/wp-content/uploads/sites/91/2020/08/Guia-lenguaje-inclusivo-no-sexista-CENPAT_final-1.pdf)
- [Guía de recomendaciones para lenguaje inclusivo (Argentina)](https://www.argentina.gob.ar/sites/default/files/2020/10/lenguaje_inclusivo_inaes_2021.pdf)
- [Guía práctica de lenguaje inclusivo (España)](https://www.lgbtqiahealtheducation.org/wp-content/uploads/2020/04/Guia-practica-de-lenguaje-inclusivo-Chrysallis.pdf)
- [Guía para el uso de un lenguaje inclusivo al género (ONU)](https://www.unwomen.org/sites/default/files/Headquarters/Attachments/Sections/Library/Gender-inclusive%20language/Guidelines-on-gender-inclusive-language-es.pdf)
- [Manual para el uso de un lenguaje incluyente y con perspectiva de género (México)](https://www.gob.mx/cms/uploads/attachment/file/183695/Manual_Lenguaje_Incluyente_con_perspectiva_de_g_nero-octubre-2016.pdf)
- [Guía de lenguaje inclusivo de Género (Chile)](https://www.cultura.gob.cl/wp-content/uploads/2023/01/guia-de-lenguaje-inclusivo-de-genero.pdf)
- [Uso del Lenguaje Inclusivo de Género, IEM (Costa Rica)](https://secretariagenero.poder-judicial.go.cr/images/Documentos/LenguajeInclusivo/Documentos/Uso-de-lenguaje-inclusivo-de-Genero-IEM-UNA.pdf)
- [Uso no sexista de la lengua, UOC (España)](https://www.uoc.edu/portal/es/servei-linguistic/redaccio/tractament-generes/index.html)
- https://huggingface.co/datasets/hackathon-pln-es/neutral-es
## Bias
As bias is what we want to tackle, this corpus pays special attention to different types of discrimination, such as sexism, racism and ableism.
## Social Impact
An inclusive translator holds significant social impact by promoting equity and representation within texts. By rectifying biases ingrained in language and fostering inclusivity, it combats discrimination, amplifies the visibility of marginalized groups, and contributes to the cultivation of a more inclusive and respectful society.
## Team members
- **Gaia Quintana Fleitas** (gaiaq)
- **Andrés Martínez Fernández-Salguero** (andresmfs)
- **Imanuel Rozenberg** (manu_20392)
- **Miguel López** (wizmik12)
- **Josué Sauca** (josue_sauca) |
ncduy/mt-en-vi | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- vi
license:
- mit
multilinguality:
- translation
pretty_name: "Machine Translation Paired English-Vietnamese Sentences"
size_categories:
- 1M<n<10M
source_datasets:
- own
- open_subtitles
- tatoeba
- opus_tedtalks
- qed_amara
- opus_wikipedia
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Dataset Card for Machine Translation Paired English-Vietnamese Sentences
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset sentences are in English (`en`) and Vietnamese (`vi`).
## Dataset Structure
### Data Instances
An instance example:
```
{
'en': 'And what I think the world needs now is more connections.',
'vi': 'Và tôi nghĩ điều thế giới đang cần bây giờ là nhiều sự kết nối hơn.',
'source': 'TED2020 v1'
}
```
### Data Fields
- `en` (str): English sentence
- `vi` (str): Vietnamese sentence
- `source` (str): source corpus of the sentence pair (e.g., `TED2020 v1`)
### Data Splits
The dataset is split into train, validation, and test sets.
| | Train | Validation | Test |
|--------------------|------:|-----------:|-----:|
| Number of examples |2884451| 11316| 11225|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ncduy0303](https://github.com/ncduy0303) for adding this dataset. |
huggan/anime-faces | ---
license: cc0-1.0
---
# Dataset Card for anime-faces
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.kaggle.com/soumikrakshit/anime-faces
- **Repository:** https://www.kaggle.com/soumikrakshit/anime-faces
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** https://github.com/Mckinsey666
### Dataset Summary
This dataset consists of 21,551 anime faces scraped from www.getchu.com, then cropped using the anime face detection algorithm in https://github.com/nagadomi/lbpcascade_animeface. All images are resized to 64 × 64 pixels for convenience. Please also cite the two sources when using this dataset.
Some outliers are still present in the dataset:
- Bad cropping results
- Some non-human faces
Feel free to contribute to this dataset by adding images of similar quality or adding image labels.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
The dataset has a `data` folder containing PNG files.
### Data Splits
Only a training split is provided.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
---
annotations_creators:
- found
language_creators:
- found
languages:
- unknown
licenses:
- unknown
multilinguality:
- unknown
pretty_name: anime-faces
size_categories:
- unknown
source_datasets:
- original
task_categories:
- image-classification
task_ids: []
--- |
inverse-scaling/NeQA | ---
language:
- en
size_categories:
- 10K<n<100K
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: NeQA - Can Large Language Models Understand Negation in Multi-choice Questions?
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
train-eval-index:
- config: inverse-scaling--NeQA
task: text-generation
task_id: text_zero_shot_classification
splits:
eval_split: train
col_mapping:
prompt: text
classes: classes
answer_index: target
---
## NeQA: Can Large Language Models Understand Negation in Multi-choice Questions? (Zhengping Zhou and Yuhui Zhang)
### General description
This task takes an existing multiple-choice dataset and negates a part of each question to see if language models are sensitive to negation. The authors find that smaller language models display approximately random performance, whereas the performance of larger models becomes significantly worse than random.
Language models failing to follow instructions in the prompt could be a serious issue that only becomes apparent on a task once models are sufficiently capable to perform non-randomly on the task.
### Example
```
The following are multiple choice questions (with answers) about common sense.

Question: If a cat has a body temp that is below average, it isn't in
A. danger
B. safe ranges
Answer:
```
(where the model should choose B)
## Submission details
### Task description
Negation is a common linguistic phenomenon that can completely alter the semantics of a sentence by changing just a few words.
This task evaluates whether language models can understand negation, which is an important step towards true natural language understanding.
Specifically, we focus on negation in open-book multi-choice questions, considering its wide range of applications and the simplicity of evaluation.
We collect a multi-choice question answering dataset, NeQA, that includes questions with negations.
When negation is presented in the question, the original correct answer becomes wrong, and the wrong answer becomes correct.
We use the accuracy metric to examine whether the model can understand negation in the questions and select the correct answer given the presence of negation.
We observe a clear inverse scaling trend on GPT-3, demonstrating that larger language models can answer more complex questions but fail at the final step of understanding negation.
### Dataset generation procedure
The dataset is created by applying rules to transform questions from a publicly available multiple-choice question answering dataset, OpenBookQA. We use a simple rule: filter for questions containing "is" and add "not" after it. For each question, we sample an incorrect answer as the new correct answer and treat the original correct answer as incorrect. We randomly sample 300 questions and balance the label distribution (50% labeled "A" and 50% labeled "B", since there are two choices per question).
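The transformation rule can be sketched as follows (a simplified reconstruction of the described procedure, not the authors' exact code):

```python
import random

def negate_question(question, correct, distractors, seed=0):
    """Insert 'not' after the first 'is' and promote a sampled wrong
    answer to the correct label (simplified reconstruction, not the
    authors' exact code)."""
    if " is " not in question:
        return None  # the rule only fires on questions containing "is"
    negated = question.replace(" is ", " is not ", 1)
    new_correct = random.Random(seed).choice(distractors)
    return {"question": negated, "answer": new_correct, "distractor": correct}
```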
### Why do you expect to see inverse scaling?
For open-book question answering, larger language models usually achieve better accuracy because more factual and commonsense knowledge is stored in the model parameters and can be used as a knowledge base to answer these questions without context.
A higher accuracy rate means a lower chance of choosing the wrong answer. Can we change the wrong answer to the correct one? A simple solution is to negate the original question. If the model cannot understand negation, it will still predict the same answer and, therefore, will exhibit an inverse scaling trend.
We expect that the model cannot understand negation because negation introduces only a small perturbation to the model input. It is difficult for the model to understand that this small perturbation leads to completely different semantics.
### Why is the task important?
This task is important because it demonstrates that current language models cannot understand negation, a very common linguistic phenomenon and a real-world challenge to natural language understanding.
### Why is the task novel or surprising?
To the best of our knowledge, no prior work shows that negation can cause inverse scaling. This finding should be surprising to the community, as large language models show an incredible variety of emergent capabilities, but still fail to understand negation, which is a fundamental concept in language.
## Results
[Inverse Scaling Prize: Round 1 Winners announcement](https://www.alignmentforum.org/posts/iznohbCPFkeB9kAJL/inverse-scaling-prize-round-1-winners#Zhengping_Zhou_and_Yuhui_Zhang__for_NeQA__Can_Large_Language_Models_Understand_Negation_in_Multi_choice_Questions_)
|
tasksource/cycic_multiplechoice | ---
license: apache-2.0
task_categories:
- multiple-choice
language:
- en
---
https://colab.research.google.com/drive/16nyxZPS7-ZDFwp7tn_q72Jxyv0dzK1MP?usp=sharing
```
@article{Kejriwal2020DoFC,
title={Do Fine-tuned Commonsense Language Models Really Generalize?},
author={Mayank Kejriwal and Ke Shen},
journal={ArXiv},
year={2020},
volume={abs/2011.09159}
}
```
added for
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
``` |
anon8231489123/Omegle_logs_dataset | ---
license: apache-2.0
language:
- en
---
~10k conversations from Omegle. Scraped using: http://web.archive.org/cdx/search/xd?url=logs.omegle.com/*&fl=timestamp,original,statuscode&output=json. For these logs to have ended up in the CDX, the URL must have been posted publicly at some point.
* PII removed by dropping conversations containing any of these words: `forbidden_words = ["kik", "telegram", "skype", "wickr", "discord", "dropbox", "insta ", "insta?", "instagram", "snap ", "snapchat"]`.
* Conversations with racial slurs removed.
* English only.
* Obviously, the dataset still contains a lot of (sometimes extreme) NSFW content. Do not view or use this dataset if you are under 18.
General process for scraping (There are probably other datasets that can be scraped using this method):
1. Go to page in archive.org cdx
2. Check if the page contains a log
3. Download the log image
4. Use OCR to read it
5. Save it to a json file.
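Steps 2–5 can be sketched as a function with the network fetch and OCR injected (pytesseract would be one OCR option; the author's exact tooling is an assumption here):

```python
import json

def scrape(cdx_rows, fetch, ocr):
    """Steps 2-5 above as a function. `fetch` and `ocr` are injected:
    in practice `fetch` would download each archived snapshot and `ocr`
    could be pytesseract.image_to_string (an assumed tool choice; the
    author's exact OCR setup is not documented)."""
    logs = []
    for timestamp, original in cdx_rows:
        snapshot = f"http://web.archive.org/web/{timestamp}/{original}"
        text = ocr(fetch(snapshot))
        if text.strip():                 # step 2: keep only pages that contain a log
            logs.append({"url": original, "text": text})
    return json.dumps(logs)              # step 5: serialize to JSON
```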
This dataset could be useful for training casual conversational AIs, but it likely still requires more filtering. Use at your own risk.
kkcosmos/instagram-images-with-captions | ---
license: unknown
dataset_info:
features:
- name: image
dtype: image
- name: item_id
dtype: int64
- name: image_path
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 3516512787.609302
num_examples: 28360
- name: test
num_bytes: 391305063.2386981
num_examples: 3152
download_size: 0
dataset_size: 3907817850.848
---
|
truehealth/medicationqa | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Focus (Drug)
dtype: string
- name: Question Type
dtype: string
- name: Answer
dtype: string
- name: Section Title
dtype: string
- name: URL
dtype: string
splits:
- name: train
num_bytes: 403030
num_examples: 690
download_size: 0
dataset_size: 403030
---
# Dataset Card for "medicationqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nampdn-ai/tiny-webtext | ---
license: mit
viewer: true
task_categories:
- text-generation
language:
- en
pretty_name: Tiny WebText
size_categories:
- 1M<n<10M
source_datasets:
- tiiuae/falcon-refinedweb
---
# Tiny WebText
The Tiny WebText dataset is designed to help models learn about perception on web text while neutralizing the bias of the source text using critical thinking methods. By providing a rich and diverse set of texts, I aim to improve the ability of models to understand and analyze information in a more objective and unbiased manner.
This dataset can be used to train and evaluate natural language processing and machine learning models, with the goal of improving their perception and critical thinking skills. It is a valuable resource for researchers and developers, especially those working in the fields of machine learning and data engineering.
The dataset is augmented using a subset of [Falcon-RefinedWeb](https://arxiv.org/abs/2306.01116), with additional text generated by the [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) model to improve the quality and diversity of the texts.
I welcome any feedback or contributions. Thank you for your interest in my work! |
Shrey-1329/cxiu_hf_dataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1108945726.54
num_examples: 6060
download_size: 1108991167
dataset_size: 1108945726.54
task_categories:
- image-to-text
language:
- en
tags:
- medical
size_categories:
- 1K<n<10K
---
# Indiana University Chest Xray Dataset Card
## Data sources:
This is a converted and processed version of the open access pneumonia chest x-ray dataset provided by Indiana University.<br>
You can see its information page [here](https://openi.nlm.nih.gov/faq).<br>
The compressed images in the png format were downloaded from [here](https://openi.nlm.nih.gov/imgs/collections/NLMCXR_png.tgz) and the corresponding reports from [here](https://openi.nlm.nih.gov/imgs/collections/NLMCXR_reports.tgz).
## Data fields:
There are two fields: image and text.
The images are the x-rays and the texts are their associated findings.
## Preprocessing done:
1. **Make all text lowercase**: Convert all text to lowercase to ensure consistent and case-insensitive processing.
2. **Remove all punctuation**: Eliminate any punctuation marks (e.g., periods, commas, exclamation marks) from the text to avoid interference in language analysis.
3. **Remove all numbers**: Eliminate all numeric characters from the text since they might not be relevant for certain natural language processing tasks.
4. **Remove all words with 2 or more Xs in a row**: Remove any words that contain two or more consecutive occurrences of the letter "X" as they may not contribute meaningful information.
5. **Remove the bottom and top 2% of text by length**: Discard the shortest and longest text samples, removing the bottom 2% and top 2% of the text's length, respectively. This step is aimed at reducing the impact of outliers and ensuring a more balanced dataset. |
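For illustration, steps 1-4 could be implemented roughly as follows (a sketch, not the exact preprocessing script; step 5 operates on the corpus as a whole):

```python
import re
import string

def clean_report(text: str) -> str:
    """Apply steps 1-4 above to a single report (illustrative sketch)."""
    text = text.lower()                                               # 1. lowercase
    text = text.translate(str.maketrans("", "", string.punctuation))  # 2. drop punctuation
    text = re.sub(r"\d+", "", text)                                   # 3. drop numbers
    text = re.sub(r"\b\w*xx+\w*\b", "", text)                         # 4. drop words with >= 2 consecutive x's
    return re.sub(r"\s+", " ", text).strip()

def trim_length_outliers(reports, low=0.02, high=0.98):
    """Step 5: discard the shortest and longest 2% of reports by length."""
    ranked = sorted(reports, key=len)
    return ranked[int(len(ranked) * low):int(len(ranked) * high)]
```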
larryvrh/PIPPA-TavernFormat | ---
dataset_info:
features:
- name: categories
sequence: string
- name: name
dtype: string
- name: description
dtype: string
- name: first_msg
dtype: string
- name: personality
dtype: string
- name: example_dialogues
sequence: string
- name: conversation
list:
- name: is_human
dtype: bool
- name: message
dtype: string
splits:
- name: train
num_bytes: 174673097
num_examples: 11841
download_size: 88204818
dataset_size: 174673097
license: agpl-3.0
task_categories:
- conversational
language:
- en
tags:
- not-for-all-audiences
- roleplay
- conversational
size_categories:
- 10K<n<100K
---
# Dataset Card for "PIPPA_TavernFormat"
Converted from the deduped version (pippa_deduped.jsonl) of [PygmalionAI/PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA?not-for-all-audiences=true).
Since the CAI format and the Tavern format do not align exactly, there may be some mismatches between fields, especially character description and personality.
CollectiveCognition/chats-data-2023-09-27 | ---
license: mit
---
# Dataset Card for "Collective Cognition ChatGPT Conversations"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
### Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.
Visit Collective Cognition to browse or contribute chats here: https://collectivecognition.ai
### Supported Tasks
- **Language Model Finetuning**: Training through behavior cloning and knowledge instilling.
- **Dialogue Understanding**: Analyzing the flow and structure of conversations.
- **Sentiment Analysis**: Understanding the sentiment or emotion in the dialogues.
- **Topic Modeling**: Determining the topics discussed within the chats.
### Languages
The primary language of the dataset is English, but chats in other languages may appear as users contribute more conversations.
## Dataset Structure
### Data Instances
A typical data instance includes a chat log with a title, the model name, a flag for whether the chat used custom instructions (the instructions themselves are currently not included), and the message contents with the role of each sender (either "human" or "gpt").
The dataset is formatted to conform with the ShareGPT standard.
This is an example entry in the dataset, noting that turns are variable and can be many or as few as two:
```json
{
"title": "Interest Formulas Clarification",
"model_name": "GPT-4",
"custom_instruction": false,
"conversations": [
{
"from": "human",
"value": "Is this correct? The formula for calculating simple interest (also known as Compound Interest) is given by I = P * r^n, where \n- I represents the final amount of money, also called principal or capital. This value will keep on increasing at a fixed rate per unit time denoted by 'r' and number of times denoted by 'n'. The initial investment represented by 'P', while the percentage rate expressed as an annual fraction, i.e., p/100 representing the percentage to be paid annually. In other words it\u2019s how much you earn in one year out of your original $100 invested with a 5% annual return compounded monthly."
},
{
"from": "assistant",
"value": "I'm sorry, but there seems to be some confusion in your question. The formula given and the definitions of the variables seem to be mixed up between simple interest and compound interest. Let me clarify:\n\n1. Simple Interest: This type of interest is calculated only on the original amount (principal) that you invested or loaned. The formula for simple interest is I = PRT, where:\n - I is the interest\n - P is the principal amount (initial investment)\n - R is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - T is the time the money is invested or borrowed for, in years.\n\n2. Compound Interest: This type of interest is calculated on the initial principal and also on the accumulated interest of previous periods of a deposit or loan. The formula for compound interest is A = P(1 + r/n)^(nt), where:\n - A is the amount of money accumulated after n years, including interest.\n - P is the principal amount (the initial amount of money)\n - r is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - n is the number of times that interest is compounded per year\n - t is the number of years the money is invested or borrowed for.\n\nIn both cases, the interest rate is usually expressed as an annual rate, regardless of the length of the term."
}
]
}
```
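A record in this format can be flattened into (human, assistant) turn pairs, e.g. for supervised finetuning. A minimal sketch using the field names from the sample above:

```python
def to_turn_pairs(example: dict) -> list[tuple[str, str]]:
    """Pair each human message with the assistant reply that follows it."""
    convo = example["conversations"]
    pairs = []
    for cur, nxt in zip(convo, convo[1:]):
        if cur["from"] == "human" and nxt["from"] == "assistant":
            pairs.append((cur["value"], nxt["value"]))
    return pairs
```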
### Data Splits
Currently, the dataset is not divided into specific splits (train, test, validation).
## Dataset Creation
### Curation Rationale
The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.
### Source Data
The data originates from user contributions on the "Collective Cognition" website.
### Personal and Sensitive Information
All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@collectivecognition.ai
## Considerations for Using the Data
### Social Impact of Dataset
The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.
### Discussion of Biases
There might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.
### Other Known Limitations
The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.
## Additional Information
### Licensing Information
MIT |
alexandrainst/nst-da | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: age
dtype: int64
- name: sex
dtype: string
- name: dialect
dtype: string
- name: recording_datetime
dtype: string
splits:
- name: train
num_bytes: 55199435558.0
num_examples: 182605
- name: test
num_bytes: 8894080220.0
num_examples: 54747
download_size: 5358057252
dataset_size: 64093515778.0
size_categories:
- 100K<n<1M
license: cc0-1.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- da
pretty_name: NST-da
---
# Dataset Card for NST-da
## Dataset Description
- **Repository:** <https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-55/>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 5.36 GB
- **Size of the generated dataset:** 64.09 GB
- **Total amount of disk used:** 69.45 GB
### Dataset Summary
This dataset is an upload of the [NST Danish ASR Database (16 kHz) – reorganized](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-55/).
The training and test splits are the original ones.
### Supported Tasks and Leaderboards
Training automatic speech recognition is the intended task for this dataset. No leaderboard is active at this point.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 5.36 GB
- **Size of the generated dataset:** 64.09 GB
- **Total amount of disk used:** 69.45 GB
An example from the dataset looks as follows.
```
{
'audio': {
'path': 'dk14x404-05072000-1531_u0008121.wav',
'array': array([ 0.00265503, 0.00248718, 0.00253296, ..., -0.00030518,
-0.00035095, -0.00064087]),
'sampling_rate': 16000
},
'text': 'Desuden er der en svømmeprøve, en fremmedsprogstest samt en afsluttende samtale.',
'speaker_id': 404,
'age': 24,
'sex': 'Female',
'dialect': 'Storkøbenhavn',
'recording_datetime': '2000-07-05T15:31:14'
}
```
### Data Fields
The data fields are the same among all splits.
- `audio`: an `Audio` feature.
- `text`: a `string` feature.
- `speaker_id`: an `int64` feature.
- `age`: an `int64` feature.
- `sex`: a `string` feature.
- `dialect`: a `string` feature.
- `recording_datetime`: a `string` feature.
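Per-sample quantities such as clip duration can be derived from these fields. A sketch, assuming samples have been loaded with the `datasets` library (e.g. `load_dataset("alexandrainst/nst-da", split="test", streaming=True)`):

```python
# Assumes samples come from e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("alexandrainst/nst-da", split="test", streaming=True)

def clip_duration_seconds(example: dict) -> float:
    """Recording length in seconds, from the decoded waveform and its sampling rate."""
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"]

def is_dialect(example: dict, dialect: str) -> bool:
    """Simple metadata filter on the `dialect` field described above."""
    return example["dialect"] == dialect
```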
### Dataset Statistics
There are 182,605 samples in the training split, and 54,747 samples in the test split.
#### Speakers
There are 539 unique speakers in the training dataset and 56 unique speakers in the test dataset, 54 of whom are also present in the training set.
#### Age Distribution
![nst-da-age-distribution.png](https://cdn-uploads.huggingface.co/production/uploads/60d368a613f774189902f555/iNMmVXXda7LtzgZEHe1eq.png)
#### Dialect Distribution
![nst-da-dialect-distribution.png](https://cdn-uploads.huggingface.co/production/uploads/60d368a613f774189902f555/sckW27xYUz8apMwbLebvD.png)
#### Sex Distribution
![nst-da-sex-distribution.png](https://cdn-uploads.huggingface.co/production/uploads/60d368a613f774189902f555/8Q7ZunYaLQ3laOc7yQvI8.png)
#### Transcription Length Distribution
![nst-da-length-distribution.png](https://cdn-uploads.huggingface.co/production/uploads/60d368a613f774189902f555/W_LA2nydEZuEeK_Z_x2LE.png)
## Dataset Creation
### Curation Rationale
There are not many large-scale ASR datasets in Danish.
### Source Data
The data originates from the now bankrupt company Nordisk språkteknologi (NST), whose data was transferred to the National Library of Norway, who subsequently released it into the public domain.
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from [The Alexandra
Institute](https://alexandra.dk/) reorganised the dataset and uploaded it to the Hugging Face Hub.
### Licensing Information
The dataset is licensed under the [CC0
license](https://creativecommons.org/share-your-work/public-domain/cc0/). |
abhinand/tamil-alpaca | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
- name: system_prompt
dtype: string
splits:
- name: train
num_bytes: 287556653
num_examples: 51876
download_size: 0
dataset_size: 287556653
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: gpl-3.0
task_categories:
- text-generation
language:
- ta
pretty_name: tamil-alpaca
size_categories:
- 10K<n<100K
---
# Dataset Card for "tamil-alpaca"
This repository includes a Tamil-translated version of the [Alpaca dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned).
This dataset is part of the release of Tamil LLaMA family of models – an important step in advancing LLMs for the Tamil language. To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP) ]() that outlines our journey and the model's potential impact.
**GitHub Repository:** [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama)
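Each record follows the Alpaca schema (`instruction`, optional `input`, `output`). A common way to render a record into a single training prompt is the standard Alpaca template; this is a sketch, and the exact template used for Tamil LLaMA may differ (see the GitHub repository):

```python
def to_alpaca_prompt(example: dict) -> str:
    """Render one record with the standard Alpaca prompt template."""
    if example.get("input"):
        return ("### Instruction:\n{instruction}\n\n"
                "### Input:\n{input}\n\n"
                "### Response:\n{output}").format(**example)
    return ("### Instruction:\n{instruction}\n\n"
            "### Response:\n{output}").format(**example)
```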
## Models trained using this dataset
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
## Citation
If you use this model or any of the Tamil-Llama datasets in your research, please cite:
```bibtex
@misc{balachandran2023tamilllama,
title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
author={Abhinand Balachandran},
year={2023},
eprint={2311.05845},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ComponentSoft/k8s-kubectl-cot-20k | ---
dataset_info:
features:
- name: objective
dtype: string
- name: command_name
dtype: string
- name: command
dtype: string
- name: description
dtype: string
- name: syntax
dtype: string
- name: flags
list:
- name: default
dtype: string
- name: description
dtype: string
- name: option
dtype: string
- name: short
dtype: string
- name: question
dtype: string
- name: chain_of_thought
dtype: string
splits:
- name: train
num_bytes: 51338358
num_examples: 19661
download_size: 0
dataset_size: 51338358
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "k8s-kubectl-cot-20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Djacon/ru-izard-emotions | ---
language:
- ru
license:
- mit
multilinguality:
- russian
task_categories:
- text-classification
task_ids:
- sentiment-classification
- multi-class-classification
- multi-label-classification
pretty_name: RuIzardEmotions
tags:
- emotion
size_categories:
- 10K<n<100K
---
# Dataset Card for RuIzardEmotions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
### Dataset Summary
The RuIzardEmotions dataset is a high-quality translation of the [go-emotions](https://huggingface.co/datasets/go_emotions) dataset and an additional [emotion-detection](https://www.kaggle.com/datasets/ishantjuyal/emotions-in-text/data) dataset. It contains 30k Reddit comments labeled for 10 emotion categories (__joy__, __sadness__, __anger__, __enthusiasm__, __surprise__, __disgust__, __fear__, __guilt__, __shame__ and __neutral__).
The datasets were translated with the [DeepL](https://www.deepl.com/translator) translator and further post-processed. The idea for the dataset was inspired by [Izard's model](https://en.wikipedia.org/wiki/Differential_Emotions_Scale) of human emotions.
The dataset comes with predefined train/val/test splits.
### Supported Tasks and Leaderboards
This dataset is intended for multi-class, multi-label emotion classification.
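For multi-label training, each sample's emotion labels are typically encoded as a multi-hot vector over the 10 categories. A sketch (the exact label field layout in the dataset may differ):

```python
EMOTIONS = ["joy", "sadness", "anger", "enthusiasm", "surprise",
            "disgust", "fear", "guilt", "shame", "neutral"]

def to_multi_hot(labels: list[str]) -> list[int]:
    """Encode a set of emotion labels as a multi-hot target vector."""
    present = set(labels)
    return [int(e in present) for e in EMOTIONS]
```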
### Languages
The data is in Russian.
## Dataset Structure
### Data Instances
Each instance is a reddit comment with one or more emotion annotations (or neutral).
### Data Splits
The simplified data includes a set of train/val/test splits with 24k, 3k, and 3k examples respectively.
## Considerations for Using the Data
### Social Impact of Dataset
Emotion detection is a worthwhile problem which can potentially lead to improvements such as better human/computer
interaction. However, emotion detection algorithms (particularly in computer vision) have been abused in some cases
to make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance
pricing, and student attentiveness.
## Additional Information
### Licensing Information
The GitHub repository which houses this dataset has an
[Apache License 2.0](https://github.com/Djacon/russian-emotion-detection/blob/main/LICENSE).
### Citation Information
```
@inproceedings{Djacon,
author={Djacon},
title={RuIzardEmotions: A Dataset of Fine-Grained Emotions},
year={2023}
}
``` |
pixparse/cc12m-wds | ---
license: other
license_name: conceptual-12m
license_link: LICENSE
task_categories:
- image-to-text
size_categories:
- 10M<n<100M
---
# Dataset Card for Conceptual Captions 12M (CC12M)
## Dataset Description
- **Repository:** [Conceptual 12M repository](https://github.com/google-research-datasets/conceptual-12m)
- **Paper:** [Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts](https://arxiv.org/abs/2102.08981)
- **Point of Contact:** [Conceptual Captions e-mail](mailto:conceptual-captions@google.com)
### Dataset Summary
Conceptual 12M (CC12M) is a dataset with 12 million image-text pairs specifically meant to be used for vision-and-language pre-training.
Its data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M (CC3M).
### Usage
This instance of Conceptual Captions is in [webdataset](https://github.com/webdataset/webdataset/commits/main) .tar format. It can be used with the webdataset library or upcoming releases of Hugging Face `datasets`.
...More Detail TBD
### Data Splits
This dataset was downloaded using img2dataset. Images with a shortest edge larger than 512 pixels were resized on download so that the shortest edge equals 512.
#### Train
* `cc12m-train-*.tar`
* Downloaded on 2021/18/22
* 2176 shards, 10968539 samples
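When reading with webdataset, the shard list can be given via brace notation or enumerated explicitly. A sketch (the zero-padded 4-digit shard numbering is an assumption based on img2dataset's default naming):

```python
def shard_urls(base_url: str, n_shards: int) -> list[str]:
    """Enumerate the training shards, e.g. to pass to webdataset.WebDataset(...)."""
    return [f"{base_url}/cc12m-train-{i:04d}.tar" for i in range(n_shards)]

# Equivalent webdataset brace notation (assumed naming):
#   import webdataset as wds
#   ds = wds.WebDataset(f"{base_url}/cc12m-train-{{0000..2175}}.tar")
```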
## Additional Information
### Dataset Curators
Soravit Changpinyo, Piyush Sharma, Nan Ding and Radu Soricut.
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
### Citation Information
```bibtex
@inproceedings{changpinyo2021cc12m,
title = {{Conceptual 12M}: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts},
author = {Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},
booktitle = {CVPR},
year = {2021},
}
``` |
dataautogpt3/Dalle3 | ---
license: mit
---
10,000 high-quality captions with paired images produced by DALL·E 3. A `raw.zip` is also included in case the processed files were uploaded incorrectly.
lovesnowbest/T-Eval | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
- zh
tags:
- code
pretty_name: teval
size_categories:
- 100M<n<1B
---
# T-Eval: Evaluating the Tool Utilization Capability of Large Language Models Step by Step
[![arXiv](https://img.shields.io/badge/arXiv-2312.14033-b31b1b.svg)](https://arxiv.org/abs/2312.14033)
[![license](https://img.shields.io/github/license/InternLM/opencompass.svg)](./LICENSE)
## ✨ Introduction
This is an evaluation harness for the benchmark described in [T-Eval: Evaluating the Tool Utilization Capability of Large Language Models Step by Step](https://arxiv.org/abs/2312.14033).
[[Paper](https://arxiv.org/abs/2312.14033)]
[[Project Page](https://open-compass.github.io/T-Eval/)]
[[LeaderBoard](https://open-compass.github.io/T-Eval/leaderboard.html)]
[[HuggingFace](https://huggingface.co/datasets/lovesnowbest/T-Eval)]
> Large language models (LLM) have achieved remarkable performance on various NLP tasks and are augmented by tools for broader applications. Yet, how to evaluate and analyze the tool utilization capability of LLMs is still under-explored. In contrast to previous works that evaluate models holistically, we comprehensively decompose the tool utilization into multiple sub-processes, including instruction following, planning, reasoning, retrieval, understanding, and review. Based on that, we further introduce T-Eval to evaluate the tool-utilization capability step by step. T-Eval disentangles the tool utilization evaluation into several sub-domains along model capabilities, facilitating the inner understanding of both holistic and isolated competency of LLMs. We conduct extensive experiments on T-Eval and in-depth analysis of various LLMs. T-Eval not only exhibits consistency with the outcome-oriented evaluation but also provides a more fine-grained analysis of the capabilities of LLMs, providing a new perspective in LLM evaluation on tool-utilization ability.
<!--
[T-Eval: ]()<br>
Zehui Chen<sup>♠</sup>, Weihua Du<sup>♠</sup>, Wenwei Zhang<sup>♠</sup>, Kuikun Liu, Jiangning Liu, Miao Zheng, Jingming Zhuo, Songyang Zhang, Dahua Lin, Kai Chen<sup>♦</sup>, Feng Zhao<sup>♦</sup>
<sup>♠</sup> Equal Contribution<br>
<sup>♦</sup> Corresponding Author -->
<div>
<center>
<img src="figs/teaser.png">
</div>
## 🚀 What's New
- **[2024.02.18]** Release new [data](https://drive.google.com/file/d/1nQ0pn26qd0FGU8UkfSTxNdu6uWI0QXTY/view?usp=sharing) (both Chinese and English) and code for faster inference!🚀🚀🚀 The leaderboard will be updated soon! We also provide template examples for reference.
- **[2024.01.08]** Release [ZH Leaderboard](https://open-compass.github.io/T-Eval/leaderboard_zh.html) and ~~[ZH data](https://drive.google.com/file/d/1z25duwZAnBrPN5jYu9-8RMvfqnwPByKV/view?usp=sharing)~~, where the questions and answer formats are in Chinese. (公布了中文评测数据集和榜单)✨✨✨
- **[2023.12.22]** Paper available on [ArXiv](https://arxiv.org/abs/2312.14033). 🔥🔥🔥
- **[2023.12.21]** Release the test scripts and data for T-Eval. 🎉🎉🎉
## 🧾 TODO
- [x] Change the role of function response from `system` to `function`.
- [x] Merge consecutive same role conversations.
- [x] Provide template configs for open-sourced models.
- [x] Provide dev set for T-Eval, reducing the evaluation time.
- [x] Optimize the inference pipeline of huggingface model provided by Lagent, which will be 3x faster. **(Please upgrade Lagent to v0.2)**
- [ ] Support inference on Opencompass.
~~NOTE: These TODOs will be started after 2024.2.1~~ Thanks for your patience!
## 🛠️ Preparations
```bash
$ git clone https://github.com/open-compass/T-Eval.git
$ cd T-Eval
$ pip install -r requirements.txt
```
## 🛫️ Get Started
We support both API-based models and HuggingFace models via [Lagent](https://github.com/InternLM/lagent).
### 💾 Test Data
We provide both Google Drive and Hugging Face Datasets links to download the test data:
1. Google Drive
~~[[EN data](https://drive.google.com/file/d/1ebR6WCCbS9-u2x7mWpWy8wV_Gb6ltgpi/view?usp=sharing)] (English format) [[ZH data](https://drive.google.com/file/d/1z25duwZAnBrPN5jYu9-8RMvfqnwPByKV/view?usp=sharing)] (Chinese format)~~
[T-Eval Data](https://drive.google.com/file/d/1nQ0pn26qd0FGU8UkfSTxNdu6uWI0QXTY/view?usp=sharing)
2. HuggingFace Datasets
You can also access the dataset through huggingface via this [link](https://huggingface.co/datasets/lovesnowbest/T-Eval).
```python
from datasets import load_dataset
dataset = load_dataset("lovesnowbest/T-Eval")
```
After downloading, please put the data in the `data` folder directly:
```
- data/
- instruct_v2.json
- plan_json_v2.json
...
```
### 🤖 API Models
1. Set your OPENAI key in your environment.
```bash
export OPENAI_API_KEY=xxxxxxxxx
```
2. Run the model with the following scripts
```bash
# test all data at once
sh test_all_en.sh api gpt-4-1106-preview gpt4
# test ZH dataset
sh test_all_zh.sh api gpt-4-1106-preview gpt4
# test for Instruct only
python test.py --model_type api --model_path gpt-4-1106-preview --resume --out_name instruct_gpt4.json --out_dir work_dirs/gpt4/ --dataset_path data/instruct_v2.json --eval instruct --prompt_type json
```
### 🤗 HuggingFace Models
1. Download the huggingface model to your local path.
2. Modify the `meta_template` json according to your tested model.
3. Run the model with the following scripts
```bash
# test all data at once
sh test_all_en.sh hf $HF_PATH $HF_MODEL_NAME $META_TEMPLATE
# test ZH dataset
sh test_all_zh.sh hf $HF_PATH $HF_MODEL_NAME $META_TEMPLATE
# test for Instruct only
python test.py --model_type hf --model_path $HF_PATH --resume --out_name instruct_$HF_MODEL_NAME.json --out_dir data/work_dirs/ --dataset_path data/instruct_v1.json --eval instruct --prompt_type json --model_display_name $HF_MODEL_NAME --meta_template $META_TEMPLATE
```
### 💫 Final Results
Once you have finished all test samples, detailed evaluation results will be logged at `$out_dir/$model_display_name/$model_display_name_-1.json` (for the ZH dataset, there is a `_zh` suffix). To obtain your final score, please run the following command:
```bash
python teval/utils/convert_results.py --result_path $out_dir/$model_display_name/$model_display_name_-1.json
```
## 🔌 Protocols
T-Eval adopts multi-conversation style evaluation to gauge the model. The format of our saved prompt is as follows:
```python
[
{
"role": "system",
"content": "You have access to the following API:\n{'name': 'AirbnbSearch.search_property_by_place', 'description': 'This function takes various parameters to search properties on Airbnb.', 'required_parameters': [{'name': 'place', 'type': 'STRING', 'description': 'The name of the destination.'}], 'optional_parameters': [], 'return_data': [{'name': 'property', 'description': 'a list of at most 3 properties, containing id, name, and address.'}]}\nPlease generate the response in the following format:\ngoal: goal to call this action\n\nname: api name to call\n\nargs: JSON format api args in ONLY one line\n"
},
{
"role": "user",
"content": "Call the function AirbnbSearch.search_property_by_place with the parameter as follows: 'place' is 'Berlin'."
}
]
```
where `role` can be ['system', 'user', 'assistant'], and `content` must be in string format. Before running inference with an LLM, we need to construct it into a raw string format via `meta_template`. `meta_template` examples are provided at [meta_template.py](teval/utils/meta_template.py):
```python
[
dict(role='system', begin='<|System|>:', end='\n'),
dict(role='user', begin='<|User|>:', end='\n'),
dict(
role='assistant',
begin='<|Bot|>:',
end='<eoa>\n',
generate=True)
]
```
You need to specify the `begin` and `end` tokens for your tested huggingface model at [meta_template.py](teval/utils/meta_template.py) and pass the `meta_template` argument in `test.py`, using the same name you set in `meta_template.py`. As for OpenAI models, we handle that for you.
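Conceptually, rendering works by wrapping each message in its role's `begin`/`end` tokens and leaving the assistant's `begin` token open for generation. This is a simplified sketch of that conversion, not Lagent's actual implementation:

```python
def apply_meta_template(messages: list[dict], meta_template: list[dict]) -> str:
    """Render role-tagged messages into one raw prompt string."""
    spec = {entry["role"]: entry for entry in meta_template}
    parts = [spec[m["role"]]["begin"] + m["content"] + spec[m["role"]]["end"]
             for m in messages]
    # Leave the assistant's begin token open so the model continues from it.
    parts.append(spec["assistant"]["begin"])
    return "".join(parts)
```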
## 📊 Benchmark Results
More detailed and comprehensive benchmark results can refer to 🏆 [T-Eval official leaderboard](https://open-compass.github.io/T-Eval/leaderboard.html) !
<div>
<center>
<img src="figs/teval_results.png">
</div>
### ✉️ Submit Your Results
You can submit your inference results (produced by running `test.py`) to [this email](mailto:lovesnow@mail.ustc.edu.cn). We will run your predictions and update the results in our leaderboard. Please also provide the scale of your tested model. A sample structure of your submission should look like:
```
$model_display_name/
instruct_$model_display_name/
query_0_1_0.json
query_0_1_1.json
...
plan_json_$model_display_name/
plan_str_$model_display_name/
...
```
## ❤️ Acknowledgements
T-Eval is built with [Lagent](https://github.com/InternLM/lagent) and [OpenCompass](https://github.com/open-compass/opencompass). Thanks for their awesome work!
## 🖊️ Citation
If you find this project useful in your research, please consider citing:
```
@article{chen2023t,
title={T-Eval: Evaluating the Tool Utilization Capability Step by Step},
author={Chen, Zehui and Du, Weihua and Zhang, Wenwei and Liu, Kuikun and Liu, Jiangning and Zheng, Miao and Zhuo, Jingming and Zhang, Songyang and Lin, Dahua and Chen, Kai and others},
journal={arXiv preprint arXiv:2312.14033},
year={2023}
}
```
## 💳 License
This project is released under the Apache 2.0 [license](./LICENSE). |
5CD-AI/Vietnamese-meta-math-MetaMathQA-40K-gg-translated | ---
task_categories:
- question-answering
language:
- en
- vi
tags:
- math
- math-qa
size_categories:
- 10K<n<100K
--- |
UrbanSyn/UrbanSyn | ---
license: cc-by-sa-4.0
task_categories:
- object-detection
- image-segmentation
- depth-estimation
language:
- en
tags:
- Urban Scenario
- Autonomous Driving
- Synthethic data
size_categories:
- 1K<n<10K
pretty_name: UrbanSyn
---
<img src="front_urbanSyn.png" width=100% />
# UrbanSyn Dataset
UrbanSyn is an open synthetic dataset featuring photorealistic driving scenes. It contains ground-truth annotations for semantic segmentation, scene depth, panoptic instance segmentation, and 2-D bounding boxes. Website [https://urbansyn.org](https://urbansyn.org)
## Overview
UrbanSyn is a diverse, compact, and photorealistic dataset that provides more than 7.5k synthetic annotated images. It was born to address the synth-to-real domain gap, contributing to unprecedented synthetic-only baselines used by domain adaptation (DA) methods.
**- Reduce the synth-to-real domain gap**
UrbanSyn dataset helps to reduce the domain gap by contributing to unprecedented synthetic-only baselines used by domain adaptation (DA) methods.
**- Ground-truth annotations**
UrbanSyn comes with photorealistic color images, per-pixel semantic segmentation, depth, instance panoptic segmentation, and 2-D bounding boxes.
**- Open for research and commercial purposes**
UrbanSyn may be used for research and commercial purposes. It is released publicly under the Creative Commons Attribution-Commercial-ShareAlike 4.0 license.
**- High-degree of photorealism**
UrbanSyn features highly realistic and curated driving scenarios leveraging procedurally-generated content and high-quality curated assets. To achieve UrbanSyn photorealism we leverage industry-standard unbiased path-tracing and AI-based denoising techniques.
## White Paper
**[[Arxiv]](https://arxiv.org/abs/2312.12176)**
When using or referring to the UrbanSyn dataset in your research, please cite our white paper:
```BibTeX
@misc{gomez2023one,
title={All for One, and One for All: UrbanSyn Dataset, the third Musketeer of Synthetic Driving Scenes},
author={Jose L. Gómez and Manuel Silva and Antonio Seoane and Agnès Borrás and Mario Noriega and Germán Ros and Jose A. Iglesias-Guitian and Antonio M. López},
year={2023},
eprint={2312.12176},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Terms of Use
The UrbanSyn Dataset is provided by the Computer Vision Center (UAB) and CITIC (University of A Coruña).
UrbanSyn may be used for research and commercial purposes, and it is subject to the Creative Commons Attribution-Commercial-ShareAlike 4.0. A summary of the CC-BY-SA 4.0 licensing terms can be found **[[here]](https://creativecommons.org/licenses/by-sa/4.0/deed.en)**.
Due to constraints from our asset providers for UrbanSyn, we prohibit the use of generative AI technologies for reverse engineering any assets or creating content for stock media platforms based on the UrbanSyn dataset.
While we strive to generate precise data, all information is presented 'as is' without any express or implied warranties. We explicitly disclaim all representations and warranties regarding the validity, scope, accuracy, completeness, safety, or utility of the licensed content, including any implied warranties of merchantability, fitness for a particular purpose, or otherwise.
## Acknowledgements
Funded by Grant agreement PID2020-115734RB-C21 "SSL-ADA" and Grant agreement PID2020-115734RB-C22 "PGAS-ADA"
<img src="MICINN_Gob_AEI_1.jpg" width="300" />
## For more information about our team members and how to contact us, visit our website [https://urbansyn.org](https://urbansyn.org)
## Folder structure and content
- ```rgb```: contains RGB images with a resolution of 2048x1024 in PNG format.
- ```ss and ss_colour``` : contains the pixel-level semantic segmentation labels in grayscale (value = Class ID) and colour (value = Class RGB) respectively in PNG format. We follow the 19 training classes defined on Cityscapes:
| name | trainId | color |
|----------------------|---------|-----------------|
| 'road' | 0 | (128, 64,128) |
| 'sidewalk' | 1 | (244, 35,232) |
| 'building' | 2 | ( 70, 70, 70) |
| 'wall' | 3 | (102,102,156) |
| 'fence' | 4 | (190,153,153) |
| 'pole' | 5 | (153,153,153) |
| 'traffic light' | 6 | (250,170, 30) |
| 'traffic sign' | 7 | (220,220, 0) |
| 'vegetation' | 8 | (107,142, 35) |
| 'terrain' | 9 | (152,251,152) |
| 'sky' | 10 | ( 70,130,180) |
| 'person' | 11 | (220, 20, 60) |
| 'rider' | 12 | (255, 0, 0) |
| 'car' | 13 | ( 0, 0,142) |
| 'truck' | 14 | ( 0, 0, 70) |
| 'bus' | 15 | ( 0, 60,100) |
| 'train' | 16 | ( 0, 80,100) |
| 'motorcycle' | 17 | ( 0, 0,230) |
| 'bicycle' | 18 | (119, 11, 32) |
| 'unlabeled' | 19 | ( 0, 0, 0) |
- ```panoptic```: contains the instance segmentation of the dynamic objects in the image in PNG format. Each instance is encoded using the RGB channels, where the RG channels encode the instance number and the B channel the class ID. Dynamic objects are Person, Rider, Car, Truck, Bus, Train, Motorcycle and Bicycle.
- ```bbox2D```: contains the 2D bounding boxes and instance information for all the dynamic objects in the image up to 110 meters of distance from the camera and bigger than 150 pixels. We provide the annotations in a JSON file with the following structure:
  - bbox: provides the bounding box size determined by the top-left corner (xMin, yMin) and bottom-right corner (xMax, yMax).
  - color: corresponds to the colour of the instance in the panoptic instance segmentation map inside the panoptic folder.
  - label: defines the class name.
  - occlusion_percentage: provides the occlusion percentage of the object, where 0 means not occluded and 100 fully occluded.
- ```depth```: contains the depth map of the image in EXR format.
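As an illustration of the panoptic encoding described above, here is a minimal sketch that recovers the instance number and class ID from a panoptic pixel; the byte order of the RG channels (R high byte, G low byte) is an assumption and should be checked against the `color` field in the bbox2D annotations on real data:

```python
import numpy as np

def decode_panoptic(rgb):
    """Split a panoptic RGB image into (instance_id, class_id) maps.

    Follows the encoding described above: the R and G channels jointly
    encode the instance number and the B channel holds the class ID.
    The exact byte order (R as the high byte) is an assumption here.
    """
    rgb = rgb.astype(np.uint32)
    instance_id = rgb[..., 0] * 256 + rgb[..., 1]  # RG -> instance number
    class_id = rgb[..., 2]                         # B  -> class ID
    return instance_id, class_id

# Toy example: one pixel encoding instance 258 (R=1, G=2) of class 13 ('car').
pixel = np.array([[[1, 2, 13]]], dtype=np.uint8)
inst, cls = decode_panoptic(pixel)
```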
## Download locally with huggingface_hub library
- [Install huggingface_hub library](https://huggingface.co/docs/huggingface_hub/installation)
- You can download the dataset in Python this way:
``` from huggingface_hub import snapshot_download ```
``` snapshot_download(repo_id="UrbanSyn/UrbanSyn", repo_type="dataset") ```
- More information about how to download and additional options can be found [here](https://huggingface.co/docs/huggingface_hub/guides/download) |
starsnatched/MemGPT-Functions-DPO | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
language:
- en
tags:
- memgpt
- function calling
- function
size_categories:
- 1K<n<10K
---
I hand-crafted these for MemGPT function calling. |
Heng666/Traditional_Chinese-aya_collection | ---
dataset_info:
- config_name: aya_dataset
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 1703870
num_examples: 4909
download_size: 998499
dataset_size: 1703870
- config_name: templated_ntx_llm
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 402016
num_examples: 256
download_size: 49658
dataset_size: 402016
- config_name: templated_uner_llm
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 8259554
num_examples: 7992
- name: test
num_bytes: 2349655
num_examples: 1997
- name: validation
num_bytes: 1073059
num_examples: 998
download_size: 2001352
dataset_size: 11682268
- config_name: templated_xcsqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: validation
num_bytes: 297261
num_examples: 1000
download_size: 104307
dataset_size: 297261
- config_name: templated_xlel_wd
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 40044968
num_examples: 50949
- name: test
num_bytes: 5192254
num_examples: 6637
- name: validation
num_bytes: 4999716
num_examples: 6399
download_size: 31048776
dataset_size: 50236938
- config_name: templated_xwikis
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 147846215
num_examples: 31276
- name: test
num_bytes: 34828372
num_examples: 6998
- name: validation
num_bytes: 65882386
num_examples: 14998
download_size: 168472726
dataset_size: 248556973
- config_name: translated_adversarial_qa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 13844756
num_examples: 20000
- name: test
num_bytes: 1505596
num_examples: 2000
- name: validation
num_bytes: 1392335
num_examples: 2000
download_size: 6348021
dataset_size: 16742687
- config_name: translated_cnn_dailymail
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 284764459
num_examples: 200000
- name: test
num_bytes: 33310796
num_examples: 22980
- name: validation
num_bytes: 39146476
num_examples: 26736
download_size: 228280361
dataset_size: 357221731
- config_name: translated_dolly
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 21966259
num_examples: 29616
download_size: 12062356
dataset_size: 21966259
- config_name: translated_flan_coqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 26455946
num_examples: 12818
download_size: 16080989
dataset_size: 26455946
- config_name: translated_flan_cot
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 72401586
num_examples: 183820
download_size: 34564552
dataset_size: 72401586
- config_name: translated_flan_gem_wiki
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 84073165
num_examples: 54294
download_size: 52363999
dataset_size: 84073165
- config_name: translated_flan_lambada
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 2236958
num_examples: 8558
download_size: 1156298
dataset_size: 2236958
- config_name: translated_flan_qa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 355221
num_examples: 1080
download_size: 166753
dataset_size: 355221
- config_name: translated_hotpotqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 140638090
num_examples: 710952
download_size: 51704787
dataset_size: 140638090
- config_name: translated_joke_explaination
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 789677
num_examples: 1508
download_size: 343907
dataset_size: 789677
- config_name: translated_mintaka
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 5661977
num_examples: 28000
- name: test
num_bytes: 1608641
num_examples: 8000
- name: validation
num_bytes: 829827
num_examples: 4000
download_size: 1732447
dataset_size: 8100445
- config_name: translated_mlqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 36734954
num_examples: 46360
- name: validation
num_bytes: 3563390
num_examples: 4592
download_size: 21052230
dataset_size: 40298344
- config_name: translated_nqopen
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 56945959
num_examples: 351700
download_size: 19578926
dataset_size: 56945959
- config_name: translated_paws
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 34768722
num_examples: 98802
- name: test
num_bytes: 5625712
num_examples: 16000
- name: validation
num_bytes: 5721186
num_examples: 16000
download_size: 8291033
dataset_size: 46115620
- config_name: translated_piqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 11873740
num_examples: 32226
download_size: 4759498
dataset_size: 11873740
- config_name: translated_wikiqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 551856
num_examples: 2080
- name: test
num_bytes: 154223
num_examples: 586
- name: validation
num_bytes: 78316
num_examples: 280
download_size: 387555
dataset_size: 784395
configs:
- config_name: aya_dataset
data_files:
- split: train
path: aya_dataset/train-*
- config_name: templated_ntx_llm
data_files:
- split: train
path: templated_ntx_llm/train-*
- config_name: templated_uner_llm
data_files:
- split: train
path: templated_uner_llm/train-*
- split: test
path: templated_uner_llm/test-*
- split: validation
path: templated_uner_llm/validation-*
- config_name: templated_xcsqa
data_files:
- split: validation
path: templated_xcsqa/validation-*
- config_name: templated_xlel_wd
data_files:
- split: train
path: templated_xlel_wd/train-*
- split: test
path: templated_xlel_wd/test-*
- split: validation
path: templated_xlel_wd/validation-*
- config_name: templated_xwikis
data_files:
- split: train
path: templated_xwikis/train-*
- split: test
path: templated_xwikis/test-*
- split: validation
path: templated_xwikis/validation-*
- config_name: translated_adversarial_qa
data_files:
- split: train
path: translated_adversarial_qa/train-*
- split: test
path: translated_adversarial_qa/test-*
- split: validation
path: translated_adversarial_qa/validation-*
- config_name: translated_cnn_dailymail
data_files:
- split: train
path: translated_cnn_dailymail/train-*
- split: test
path: translated_cnn_dailymail/test-*
- split: validation
path: translated_cnn_dailymail/validation-*
- config_name: translated_dolly
data_files:
- split: train
path: translated_dolly/train-*
- config_name: translated_flan_coqa
data_files:
- split: train
path: translated_flan_coqa/train-*
- config_name: translated_flan_cot
data_files:
- split: train
path: translated_flan_cot/train-*
- config_name: translated_flan_gem_wiki
data_files:
- split: train
path: translated_flan_gem_wiki/train-*
- config_name: translated_flan_lambada
data_files:
- split: train
path: translated_flan_lambada/train-*
- config_name: translated_flan_qa
data_files:
- split: train
path: translated_flan_qa/train-*
- config_name: translated_hotpotqa
data_files:
- split: train
path: translated_hotpotqa/train-*
- config_name: translated_joke_explaination
data_files:
- split: train
path: translated_joke_explaination/train-*
- config_name: translated_mintaka
data_files:
- split: train
path: translated_mintaka/train-*
- split: test
path: translated_mintaka/test-*
- split: validation
path: translated_mintaka/validation-*
- config_name: translated_mlqa
data_files:
- split: test
path: translated_mlqa/test-*
- split: validation
path: translated_mlqa/validation-*
- config_name: translated_nqopen
data_files:
- split: train
path: translated_nqopen/train-*
- config_name: translated_paws
data_files:
- split: train
path: translated_paws/train-*
- split: test
path: translated_paws/test-*
- split: validation
path: translated_paws/validation-*
- config_name: translated_piqa
data_files:
- split: train
path: translated_piqa/train-*
- config_name: translated_wikiqa
data_files:
- split: train
path: translated_wikiqa/train-*
- split: test
path: translated_wikiqa/test-*
- split: validation
path: translated_wikiqa/validation-*
license: apache-2.0
task_categories:
- question-answering
- translation
- summarization
- zero-shot-classification
language:
- zh
pretty_name: 'Traditional_Chinese-aya_collection'
size_categories:
- 1M<n<10M
---
![Traditional_Chinese_Aya Header](https://huggingface.co/datasets/Heng666/Traditional_Chinese-aya_collection/resolve/main/Traditional_Chinese_Aya_header.jpeg)
<!-- Provide a quick summary of the dataset. -->
## Dataset Description
**Traditional Chinese Aya (TCA): a curated subset of the Aya Collection focused on Traditional Chinese**
### Overview
`Traditional Chinese Aya` is a carefully curated dataset derived from the comprehensive Aya Collection by [CohereForAI](https://huggingface.co/CohereForAI), with a particular focus on Traditional Chinese text data.
This dataset combines data from [CohereForAI/aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection), filtering out all content other than Traditional Chinese and Simplified Chinese.
### Goals
The goal of `Traditional Chinese Aya` is to provide researchers, technologists, and linguists with a ready-to-use Traditional Chinese text resource, significantly reducing the time and effort spent on data preprocessing in NLP and AI projects focused on Traditional Chinese.
### Dataset Sources and Information
- **Data source**: derived from the 64 subsets of [CohereForAI/aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection).
- **Languages**: Traditional Chinese, Simplified Chinese ('zho')
- **Applications**: well suited to tasks such as language modeling, text classification, sentiment analysis, and machine translation.
- **Paper:** [2402.06619](https://huggingface.co/papers/2402.06619)
- **Maintainer:** [Heng666](https://huggingface.co/Heng666)
- **License:** Apache-2.0
### Usage
This dataset is a foundational tool for Traditional Chinese language projects, from academic research to commercial applications.
By providing a pre-filtered source of Traditional Chinese text, `Traditional Chinese Aya` lets researchers, technologists, and developers move straight to model training, analysis, and application development without the preliminary hassle of data cleaning and language filtering.
Example:
```python
from datasets import load_dataset
dataset = load_dataset("Heng666/Traditional_Chinese-aya_collection", "aya_dataset")
```
In the code snippet above, "aya_dataset" refers to the Traditional Chinese version (100k rows) of the "aya_dataset" subset of the original "aya_collection".
You can load other subsets by specifying their names when loading the dataset.
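For readers who want to reproduce this kind of Chinese-only slice from the upstream collection, here is a hedged sketch of the language filter involved. It is shown over a plain list of records so it runs without downloading anything; with the `datasets` library the same predicate can be passed to `Dataset.filter(...)`. The field names follow the data-instance example in the original card below (`language`), with `language_code` as a fallback since this dataset's own schema uses that name:

```python
# Sketch of a Chinese-language filter over Aya-style records.
# The set of codes and the field-name fallback are assumptions
# based on the dialect table and schemas described in this card.

CHINESE_CODES = {"zho", "zho-Hans", "zho-Hant", "yue"}

def is_chinese(example):
    # Some subsets store the code under 'language', others 'language_code'.
    code = example.get("language") or example.get("language_code")
    return code in CHINESE_CODES

records = [
    {"id": 1, "inputs": "...", "language": "eng"},
    {"id": 2, "inputs": "...", "language": "zho-Hant"},
    {"id": 3, "inputs": "...", "language_code": "zho-Hans"},
]
chinese_only = [r for r in records if is_chinese(r)]
```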
### Access and Contribution
Available on the Hugging Face Hub under [Heng666/Traditional_Chinese-aya_collection](https://huggingface.co/datasets/Heng666/Traditional_Chinese-aya_collection),
`Traditional Chinese Aya` invites community contributions. Users are encouraged to give feedback and suggest improvements.
### Support and Collaboration
We are committed to creating an inclusive and supportive environment around Traditional Chinese AI and NLP research. For support, collaboration, or questions about the dataset, please reach out via the discussion section on the Hugging Face Hub.
# Original Dataset Card of Aya by CohereForAI
![Aya Header](https://huggingface.co/datasets/CohereForAI/aya_collection/resolve/main/aya_header.png)
# Dataset Summary
The Aya Collection is a massive multilingual collection consisting of 513 million instances of prompts and completions covering a wide range of tasks.
This collection incorporates instruction-style templates from fluent speakers and applies them to a curated list of datasets, as well as translations of instruction-style datasets into 101 languages. Aya Dataset, a human-curated multilingual instruction and response dataset, is also part of this collection. See our paper for more details regarding the collection.
- **Curated by:** Contributors of [Aya Open Science Initiative](https://cohere.com/research/aya)
- **Language(s):** 115 languages
- **License:** [Apache 2.0](https://opensource.org/license/apache-2-0)
- **Aya Datasets Family:**
| Name | Explanation |
|------|--------------|
| [aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) | Human-annotated multilingual instruction finetuning dataset, comprising over 204K instances across 65 languages. |
| [aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection) | Created by applying instruction-style templates from fluent speakers to 44 datasets, including translations of 19 instruction-style datasets into 101 languages.|
| [aya_evaluation_suite](https://huggingface.co/datasets/CohereForAI/aya_evaluation_suite) | A diverse evaluation set for multilingual open-ended generation, featuring 250 culturally grounded prompts in 7 languages, 200 translated prompts in 24 languages, and human-edited versions selected for cross-cultural relevance from English Dolly in 6 languages.|
# Dataset
The `Aya Collection` is a comprehensive, large corpus of datasets that can be used by researchers around the world to train multilingual models. Our goal is only to include datasets with permissive licensing for manipulation and redistribution.
The `Aya Collection` consists of three different sources of data:
1. Templated data: We collaborated with fluent speakers to create templates that allowed for the automatic expansion of existing datasets into various languages.
2. Translated data: We translated a hand-selected subset of 19 datasets into 101 languages (114 dialects) using the NLLB 3.3B parameter machine translation model.
3. Aya Dataset: We release the [Aya Dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) as a subset of the overall collection. This is the only dataset in the collection that is human-annotated in its entirety.
## Load with Datasets
To load this dataset with Datasets, you'll need to install Datasets as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("CohereForAI/aya_collection", "templated_mintaka")
```
In the above code snippet, "templated_mintaka" refers to a subset of the aya_collection. You can load other subsets by specifying its name at the time of loading the dataset.
## Data Instances
An example of a `train` instance looks as follows:
```json
{'id': 246001,
'inputs': 'The following query in English is taken from the geography category. What could be the answer to the question?\nWhat is the seventh tallest mountain in North America?',
'targets': 'The answer is Mount Lucania.',
'dataset_name': 'Mintaka-inst',
'sub_dataset_name': '-',
'task_type': 'question-answering',
'template_id': 3,
'language': 'eng',
'split': 'train',
'script': 'Latn'
}
```
## Data Fields
The data fields are the same among all splits:
- `id:` Unique id of the data point
- `inputs:` Prompt or input to the language model.
- `targets:` Completion or output of the language model.
- `dataset_name:` The name of the source dataset that the data point was taken from
- `sub_dataset_name:` If the source is a collection, this field indicates which part of that collection the data point was taken from. If it is not a collection, this field is left blank.
- `task_type:` The task type that this conversation belongs to.
- `template_id`: The id of the template applied to this data point.
- `language:` The ISO code of the dialect of the conversation.
- `script:` The script of the language.
- `split:` Indicates whether the data point is part of the `train` or the `test` split.
### Statistics
The total number of data points, including the Aya Dataset, is 513,758,189. To view the breakdown of dialect codes and the respective templated and translated data point counts in the Aya Collection, refer to the toggled table below.
<details>
<summary> <b> Breakdown of Aya Collection data point counts grouped by dialects </b> </summary>
|dialect code|language|translated data point count|templated data point count|total count |
|------------|--------|---------------------------|--------------------------|---------------|
|ace |Achinese|8240684 |2000 |8242684 |
|acm |Arabic |4120342 |0 |4120342 |
|acq |Arabic |4120342 |0 |4120342 |
|aeb |Arabic |4120342 |0 |4120342 |
|afr |Afrikaans|4120342 |6108 |4126450 |
|ajp |Arabic |4120342 |0 |4120342 |
|als |Albanian|4120342 |0 |4120342 |
|amh |Amharic |4120342 |25327 |4145669 |
|apc |Arabic |4120342 |0 |4120342 |
|arb |Arabic |6424999 |216430 |6641429 |
|ars |Arabic |4120342 |0 |4120342 |
|ary |Arabic |4120342 |18076 |4138418 |
|arz |Arabic |4120342 |0 |4120342 |
|azb |Azerbaijani|4120342 |0 |4120342 |
|azj |Azerbaijani|4120342 |0 |4120342 |
|bel |Belarusian|4120342 |21273 |4141615 |
|ben |Bengali |4120342 |30661 |4151003 |
|bjn |Banjar |8240684 |2000 |8242684 |
|bul |Bulgarian|4120342 |37722 |4158064 |
|cat |Catalan |4120342 |66900 |4187242 |
|ceb |Cebuano |4120342 |0 |4120342 |
|ces |Czech |4120342 |179604 |4299946 |
|ckb |Kurdish |4120342 |0 |4120342 |
|cym |Welsh |4120342 |0 |4120342 |
|dan |Danish |4120342 |36310 |4156652 |
|deu |German |4120342 |1326722 |5447064 |
|ell |Greek |4120342 |40291 |4160633 |
|eng |English |9771427 |8066678 |17838105 |
|epo |Esperanto|4120342 |0 |4120342 |
|est |Estonian|4120342 |0 |4120342 |
|eus |Basque |4120342 |0 |4120342 |
|fin |Finnish |4120342 |457895 |4578237 |
|fra |French |4120342 |835520 |4955862 |
|gla |Scottish Gaelic|4120342 |0 |4120342 |
|gle |Irish |4120342 |0 |4120342 |
|glg |Galician|4120342 |0 |4120342 |
|guj |Gujarati|4120342 |2157 |4122499 |
|hat |Haitian Creole|4120342 |0 |4120342 |
|hau |Hausa |4120342 |51396 |4171738 |
|heb |Hebrew |4120342 |103466 |4223808 |
|hin |Hindi |4120342 |260387 |4380729 |
|hun |Hungarian|4120342 |82039 |4202381 |
|hye |Armenian|4120342 |7080 |4127422 |
|ibo |Igbo |4120342 |36312 |4156654 |
|ind |Indonesian|4120342 |45709 |4166051 |
|isl |Icelandic|4120342 |0 |4120342 |
|ita |Italian |4120342 |405682 |4526024 |
|jav |Javanese|4120342 |829 |4121171 |
|jpn |Japanese|4120342 |2693177 |6813519 |
|kan |Kannada |4120342 |1156 |4121498 |
|kas |Kashmiri|4120342 |0 |4120342 |
|kat |Georgian|4120342 |0 |4120342 |
|kaz |Kazakh |4120342 |0 |4120342 |
|khk |Mongolian|4120342 |0 |4120342 |
|khm |Khmer |4120342 |0 |4120342 |
|kir |Kyrgyz |4120342 |0 |4120342 |
|kmr |Kurdish |4120342 |0 |4120342 |
|knc |Kanuri |8240684 |0 |8240684 |
|kor |Korean |4120342 |41011 |4161353 |
|lao |Lao |4120342 |0 |4120342 |
|lit |Lithuanian|4120342 |0 |4120342 |
|ltz |Luxembourgish|4120342 |0 |4120342 |
|lvs |Latvian |4120342 |0 |4120342 |
|mal |Malayalam|4120342 |4347 |4124689 |
|mar |Marathi |4120342 |3678 |4124020 |
|min |Minangkabau|6753788 |2000 |6755788 |
|mkd |Macedonian|4120342 |0 |4120342 |
|mlt |Maltese |4120342 |0 |4120342 |
|mni |Manipuri|4120342 |0 |4120342 |
|mri |Maori |4120342 |0 |4120342 |
|mya |Burmese |4120342 |0 |4120342 |
|nld |Dutch |4120342 |220181 |4340523 |
|nno |Norwegian|4120342 |0 |4120342 |
|nob |Norwegian|4120342 |0 |4120342 |
|npi |Nepali |4120342 |0 |4120342 |
|nso |Northern Sotho|4120342 |0 |4120342 |
|pbt |Pashto |4120342 |0 |4120342 |
|pes |Persian |4120342 |245520 |4365862 |
|plt |Malagasy|4120342 |0 |4120342 |
|pol |Polish |4120342 |332503 |4452845 |
|por |Portuguese|4120342 |287432 |4407774 |
|ron |Romanian|4120342 |36359 |4156701 |
|rus |Russian |4120342 |545920 |4666262 |
|sin |Sinhala |4120342 |195 |4120537 |
|slk |Slovak |4120342 |27845 |4148187 |
|slv |Slovenian|4120342 |25731 |4146073 |
|smo |Samoan |4120342 |0 |4120342 |
|sna |Shona |4120342 |3684 |4124026 |
|snd |Sindhi |4120342 |0 |4120342 |
|som |Somali |4120342 |2926 |4123268 |
|sot |Southern Sotho|4120342 |0 |4120342 |
|spa |Spanish |4120342 |379194 |4499536 |
|srp |Serbian |4120342 |77124 |4197466 |
|sun |Sundanese|4120342 |2208 |4122550 |
|swe |Swedish |4120342 |76486 |4196828 |
|swh |Swahili |4120342 |12726 |4133068 |
|tam |Tamil |4120342 |11462 |4131804 |
|taq |Tamasheq|4120342 |0 |4120342 |
|tel |Telugu |4120342 |477821 |4598163 |
|tgk |Tajik |4120342 |0 |4120342 |
|tha |Thai |4120342 |2125180 |6245522 |
|tur |Turkish |4120342 |59932 |4180274 |
|ukr |Ukrainian|4120342 |189384 |4309726 |
|urd |Urdu |4120342 |337739 |4458081 |
|uzn |Uzbek |4120342 |0 |4120342 |
|vie |Vietnamese|4120342 |42232 |4162574 |
|xho |Xhosa |4120342 |2952 |4123294 |
|ydd |Yiddish |4120342 |0 |4120342 |
|yor |Yoruba |4120342 |4907 |4125249 |
|yue |Chinese |4120342 |0 |4120342 |
|zho-Hans |Chinese |4120342 |54528 |4174870 |
|zho-Hant |Chinese |4120342 |0 |4120342 |
|zsm |Malay |4120342 |13950 |4134292 |
|zul |Zulu |4120342 |786 |4121128 |
|arq |Arabic |0 |6046 |6046 |
|ban |Balinese|0 |2000 |2000 |
|bbc |Toba Batak|0 |2000 |2000 |
|bem |Bemba |0 |776 |776 |
|fil |Filipino|0 |220 |220 |
|fon |Fon |0 |845 |845 |
|hrv |Croatian|0 |9007 |9007 |
|kin |Kinyarwanda|0 |11165 |11165 |
|lij |Ligurian|0 |6409 |6409 |
|mad |Madurese|0 |2000 |2000 |
|nij |Ngaju |0 |2000 |2000 |
|nor |Norwegian|0 |72352 |72352 |
|pan |Punjabi |0 |2156 |2156 |
|twi |Twi |0 |10840 |10840 |
|wol |Wolof |0 |785 |785 |
|zho |Chinese |0 |74972 |74972 |
PS: Templated data also includes Mozambican Portuguese, which doesn't have its own ISO language code.
</details>
<br>
# Motivations & Intentions
- **Curation Rationale:** Automatic augmentation of existing datasets serves to enhance the available linguistic resources for multiple languages. The list of languages was initially established from mT5 and aligned with the annotators’ language list and NLLB translation model. The datasets were translated directly from English for all languages.
# Additional Information
## Provenance
- **Methods Used:** A combination of crowd-sourced templating and automatic translation was employed to source this dataset.
- **Methodology Details:**
- *Source:* Existing NLP datasets
- *Dates of Collection:* May 2023 - Dec 2023
## Dataset Version and Maintenance
- **Maintenance Status:** Actively Maintained
- **Version Details:**
- *Current version:* 1.0
- *Last Update:* 02/2024
- *First Release:* 02/2024
## Authorship
- **Publishing Organization:** [Cohere For AI](https://cohere.com/research)
- **Industry Type:** Not-for-profit - Tech
- **Contact Details:** https://cohere.com/research/aya
## Licensing Information
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.
## Citation Information
```bibtex
@misc{singh2024aya,
title={Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning},
author={Shivalika Singh and Freddie Vargus and Daniel Dsouza and Börje F. Karlsson and Abinaya Mahendiran and Wei-Yin Ko and Herumb Shandilya and Jay Patel and Deividas Mataciunas and Laura OMahony and Mike Zhang and Ramith Hettiarachchi and Joseph Wilson and Marina Machado and Luisa Souza Moura and Dominik Krzemiński and Hakimeh Fadaei and Irem Ergün and Ifeoma Okoh and Aisha Alaagib and Oshan Mudannayake and Zaid Alyafeai and Vu Minh Chien and Sebastian Ruder and Surya Guthikonda and Emad A. Alghamdi and Sebastian Gehrmann and Niklas Muennighoff and Max Bartolo and Julia Kreutzer and Ahmet Üstün and Marzieh Fadaee and Sara Hooker},
year={2024},
eprint={2402.06619},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
0-hero/prompt-perfect-dpo | ---
tags:
- synthetic
- distillation
- GPT-4
- GPT-3.5
language:
- en
size_categories:
- 100K<n<1M
---
# DPO Version of [Prompt Perfect](https://huggingface.co/datasets/0-hero/prompt-perfect/)
## Update
#### 02-22-2024
- Noticed a correlation in the rejected_pair generation prompt (or scoring): response length (level of detail) is almost proportional to quality.
- Testing new prompts for a re-run in which quality is not directly proportional to response length.
- This correlation might result in models that generate long responses.
## All datasets have 4 additional columns
- accepted_pair - Original output or generated output (Score 4,5)
- rejected_pair - Original output or generated output (Score 1,2,3)
- generated - Generated Pair (accepted/rejected)
- generator_model - Model used to generate the pair
## How was the dataset created?
1. Score popular datasets on a scale 1-5 using `gpt-3.5-turbo` (Original Prompt Perfect Dataset)
2. Generate accepted_pairs (score 5) for rows with scores 1,2,3 using `gpt-4-0125-preview`
3. Generate rejected_pairs (score 2,1) for rows with score 4,5 using `gpt-3.5-turbo-0125`
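The three-step pipeline above can be sketched roughly as follows. The `generate_better`/`generate_worse` callables are hypothetical stand-ins for the `gpt-4-0125-preview` and `gpt-3.5-turbo-0125` generation calls, not the actual prompts used:

```python
# Minimal sketch of assembling the DPO columns from one scored row.
# The generator callables are illustrative assumptions, not the real model calls.

def build_dpo_row(instruction, original_output, score, generate_better, generate_worse):
    """Return (accepted_pair, rejected_pair, generated) for one scored row."""
    if score >= 4:
        # Original output is already good: keep it, synthesize a weaker rejection.
        return original_output, generate_worse(instruction), "rejected"
    # Original output is weak: synthesize a stronger acceptance, reject the original.
    return generate_better(instruction), original_output, "accepted"

# Toy stand-ins, purely illustrative.
better = lambda q: f"[high-quality answer to: {q}]"
worse = lambda q: f"[low-quality answer to: {q}]"

accepted, rejected, generated = build_dpo_row("What is DPO?", "A method.", 2, better, worse)
```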
## Datasets in this batch
- [airoboros-2.1](https://huggingface.co/datasets/jondurbin/airoboros-2.1)
- [alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4)
- [open-platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
- [Synthia-1.3](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
## Score with ["Self-Alignment with Instruction Backtranslation"](https://arxiv.org/abs/2308.06259)
```
Below is an instruction from an user and a candidate answer. Evaluate whether or not the answer is a good example of how AI Assistant should respond to the user’s instruction. Please assign a score using the following 5-point scale:
1: It means the answer is incomplete, vague, off-topic, controversial, or not exactly what the user asked for. For example, some content seems missing, numbered list does not start from the beginning, the opening sentence repeats user’s question. Or the response is from another person’s perspective with their personal experience (e.g. taken from blog posts), or looks like an answer from a forum. Or it contains promotional text, navigation text, or other irrelevant information.
2: It means the answer addresses most of the asks from the user. It does not directly address the user’s question. For example, it only provides a high-level methodology instead of the exact solution to user’s question.
3: It means the answer is helpful but not written by an AI Assistant. It addresses all the basic asks from the user. It is complete and self contained with the drawback that the response is not written from an AI assistant’s perspective, but from other people’s perspective. The content looks like an excerpt from a blog post, web page, or web search results. For example, it contains personal experience or opinion, mentions comments section, or share on social media, etc.
4: It means the answer is written from an AI assistant’s perspective with a clear focus of addressing the instruction. It provide a complete, clear, and comprehensive response to user’s question or instruction without missing or irrelevant information. It is well organized, self-contained, and written in a helpful tone. It has minor room for improvement, e.g. more concise and focused.
5: It means it is a perfect answer from an AI Assistant. It has a clear focus on being a helpful AI Assistant, where the response looks like intentionally written to address the user’s question or instruction without any irrelevant sentences. The answer provides high quality content, demonstrating expert knowledge in the area, is very well written, logical, easy-to-follow, engaging and insightful.
``` |
m-a-p/MAP-CC | ---
non-commercial licence: CC BY-NC-ND 4.0
license: cc-by-nc-nd-4.0
---
# MAP-CC
[**🌐 Homepage**](https://chinese-tiny-llm.github.io) | [**🤗 MAP-CC**](https://huggingface.co/datasets/m-a-p/MAP-CC) | [**🤗 CHC-Bench**](https://huggingface.co/datasets/m-a-p/CHC-Bench) | [**🤗 CT-LLM**](https://huggingface.co/collections/m-a-p/chinese-tiny-llm-660d0133dff6856f94ce0fc6) | [**📖 arXiv**](https://arxiv.org/abs/2404.04167) | [**GitHub**](https://github.com/Chinese-Tiny-LLM/Chinese-Tiny-LLM)
An open-source Chinese pretraining dataset with a scale of 800 billion tokens, offering the NLP community high-quality Chinese pretraining data.
## Disclaimer
This model, developed for academic purposes, employs rigorously compliance-checked training data to uphold the highest standards of integrity and compliance. Despite our efforts, the inherent complexities of data and the broad spectrum of model applications prevent us from ensuring absolute accuracy or appropriateness of the model outputs in every scenario.
It is essential to highlight that our model and its associated training data are intended solely for scholarly research. We explicitly disclaim any liability for problems that may arise from improper use, interpretation errors, unlawful activities, the dissemination of false information, or any data security issues related to the utilization of our model or its training data.
We strongly encourage users to report any concerns related to data misuse, security breaches, or potential infringement issues directly to us for immediate investigation and resolution.
### Contact: {`ge.zhang@uwaterloo.ca; duxinrun2000@gmail.com`}
Our commitment to responsible data sharing and the security of our academic tools is paramount. We thank you for your cooperation in maintaining the ethical use of this technology.
## License
The MAP-CC Dataset is made available under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License ([CC BY-NC-ND 4.0](LICENSE)).
By using the MAP-CC Dataset, you accept and agree to be bound by the terms and conditions of the CC BY-NC-ND 4.0 License. This license allows users to share (copy and redistribute the material in any medium or format) the MAP-CC Dataset for non-commercial purposes only, and with no modifications or derivatives, as long as proper attribution is given to the creators. For further details, please refer to the [LICENSE](LICENSE) file.
We chose the CC BY-NC-ND 4.0 License for the MAP-CC Dataset to facilitate academic and educational use, promoting the spread of knowledge while protecting the work of the creators from unauthorized commercial use or modification.
## Usage Instructions
After downloading the parts of the dataset, you can concatenate them into a single file for each split of the dataset using the following command in a UNIX-like terminal:
```bash
cat [split].gz.part* > [split].gz
```
Replace [split] with the name of the dataset component you wish to merge (zh-cc, zh-baike, zh-papers, zh-books, or zh-others). After merging, decompress the .gz file to access the dataset's content.
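If a UNIX shell is unavailable, the same merge can be done in Python. The part naming below assumes the `[split].gz.part*` convention shown above:

```python
# Concatenate the downloaded parts of one split into a single .gz file.
# Assumes the parts (e.g. "zh-cc.gz.part00", "zh-cc.gz.part01", ...) sort
# in the correct order lexicographically, as the numbered suffixes do.
import glob

def merge_parts(split: str) -> str:
    out_path = f"{split}.gz"
    with open(out_path, "wb") as out:
        for part in sorted(glob.glob(f"{split}.gz.part*")):
            with open(part, "rb") as f:
                out.write(f.read())
    return out_path
```

After merging, decompress the resulting `.gz` file as usual.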
## Dataset Composition
The dataset consists of several components, each originating from different sources and serving various purposes in language modeling and processing. Below is a brief overview of each component:
<p>
<img src="data-ratio.png" style="float: right; width: 400px; margin-left: 10px;">
<strong>zh-cc (Chinese Common Crawl)</strong><br>
Extracts from the Common Crawl project specifically filtered for Chinese content. This component is rich in diverse internet text, ranging from websites, blogs, news articles, and more.<br><br>
<strong>zh-baike (Chinese Encyclopedias)</strong><br>
A collection of articles from various Chinese encyclopedias, similar to Wikipedia but including other encyclopedic sources as well.<br><br>
<strong>zh-papers (Chinese Academic Papers)</strong><br>
This component consists of academic and research papers published in Chinese. It covers a wide range of disciplines and offers technical, domain-specific language.<br><br>
<strong>zh-books (Chinese Books)</strong><br>
Comprises texts extracted from books published in Chinese. This includes literature, non-fiction, textbooks, and more.<br><br>
<strong>zh-others</strong><br>
This category is a collection of miscellaneous texts, notably including a substantial amount of QA (Question and Answer) data, alongside a variety of other texts.<br>
</p>
## Citation
```
@misc{du2024chinese,
title={Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model},
author={Xinrun Du and Zhouliang Yu and Songyang Gao and Ding Pan and Yuyang Cheng and Ziyang Ma and Ruibin Yuan and Xingwei Qu and Jiaheng Liu and Tianyu Zheng and Xinchen Luo and Guorui Zhou and Binhang Yuan and Wenhu Chen and Jie Fu and Ge Zhang},
year={2024},
eprint={2404.04167},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
benschill/brain-tumor-collection | ---
license: pddl
---
|
jbpark0614/speechocean762 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: speaker_id_str
dtype: int64
- name: speaker_id
dtype: int64
- name: question_id
dtype: int64
- name: total_score
dtype: int64
- name: accuracy
dtype: int64
- name: completeness
dtype: float64
- name: fluency
dtype: int64
- name: prosodic
dtype: int64
- name: text
dtype: string
- name: audio
dtype: audio
- name: path
dtype: string
splits:
- name: test
num_bytes: 288402967.0
num_examples: 2500
- name: train
num_bytes: 290407029.0
num_examples: 2500
download_size: 0
dataset_size: 578809996.0
---
# Dataset Card for "speechocean762"
The dataset was introduced in:
- Zhang, Junbo, et al. "speechocean762: An open-source non-native English speech corpus for pronunciation assessment." arXiv preprint arXiv:2104.01378 (2021).
- Currently, phonetic-level evaluation is omitted (only the total sentence-level scores are used).
- The original full data link: https://github.com/jimbozhang/speechocean762
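As a quick usage sketch, the sentence-level score fields can be summarised once the dataset is loaded (e.g. with `load_dataset("jbpark0614/speechocean762")`). The toy rows below are illustrative, not real corpus entries:

```python
# Average each sentence-level score field over an iterable of example dicts.
def mean_scores(rows, keys=("accuracy", "fluency", "prosodic", "total_score")):
    rows = list(rows)
    return {k: sum(r[k] for r in rows) / len(rows) for k in keys}

# Toy examples with the same field names as the dataset.
toy = [
    {"accuracy": 8, "fluency": 9, "prosodic": 9, "total_score": 8},
    {"accuracy": 6, "fluency": 7, "prosodic": 7, "total_score": 6},
]
summary = mean_scores(toy)
```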
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lasha-nlp/CONDAQA |
---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
- crowdsourced
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: condaqa
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- negation
- reading comprehension
task_categories:
- question-answering
task_ids: []
---
# Dataset Card for CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation
## Dataset Description
- **Repository:** [https://github.com/AbhilashaRavichander/CondaQA](https://github.com/AbhilashaRavichander/CondaQA)
- **Paper:** [https://arxiv.org/abs/2211.00295](https://arxiv.org/abs/2211.00295)
- **Point of Contact:** aravicha@andrew.cmu.edu
## Dataset Summary
Data from the EMNLP 2022 paper by Ravichander et al.: "CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation".
If you use this dataset, we would appreciate you citing our work:
```
@inproceedings{ravichander-et-al-2022-condaqa,
title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
author={Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
proceedings={EMNLP 2022},
year={2022}
}
```
From the paper: "We introduce CondaQA to facilitate the future development of models that can process negation effectively. This is the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs. We collect paragraphs with diverse negation cues, then have crowdworkers ask questions about the _implications_ of the negated statement in the passage. We also have workers make three kinds of edits to the passage---paraphrasing the negated statement, changing the scope of the negation, and reversing the negation---resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts. CondaQA features 14,182 question-answer pairs with over 200 unique negation cues."
### Supported Tasks and Leaderboards
The task is to answer a question given a Wikipedia passage that includes something being negated. There is no official leaderboard.
### Language
English
## Dataset Structure
### Data Instances
Here's an example instance:
```
{"QuestionID": "q10",
"original cue": "rarely",
"PassageEditID": 0,
"original passage": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws.",
"SampleID": 5294,
"label": "YES",
"original sentence": "Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time.",
"sentence2": "If a drug addict is caught with marijuana, is there a chance he will be jailed?",
"PassageID": 444,
"sentence1": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws."
}
```
### Data Fields
* `QuestionID`: unique ID for this question (might be asked for multiple passages)
* `original cue`: Negation cue that was used to select this passage from Wikipedia
* `PassageEditID`: 0 = original passage, 1 = paraphrase-edit passage, 2 = scope-edit passage, 3 = affirmative-edit passage
* `original passage`: Original Wikipedia passage the passage is based on (note that the passage might either be the original Wikipedia passage itself, or an edit based on it)
* `SampleID`: unique ID for this passage-question pair
* `label`: answer
* `original sentence`: Sentence that contains the negated statement
* `sentence2`: question
* `PassageID`: unique ID for the Wikipedia passage
* `sentence1`: passage
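Because rows that share a passage and question differ only in `PassageEditID` (0-3), examples can be grouped into contrastive clusters for consistency evaluation. A minimal sketch with toy records (keying by `PassageID` and `QuestionID` is one reasonable choice, not an official scheme):

```python
# Group examples into contrastive clusters across the four passage edits.
from collections import defaultdict

def contrast_clusters(examples):
    clusters = defaultdict(list)
    for ex in examples:
        clusters[(ex["PassageID"], ex["QuestionID"])].append(ex)
    return clusters

# Toy records for illustration only.
toy = [
    {"PassageID": 444, "QuestionID": "q10", "PassageEditID": 0, "label": "YES"},
    {"PassageID": 444, "QuestionID": "q10", "PassageEditID": 3, "label": "NO"},
]
clusters = contrast_clusters(toy)
```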
### Data Splits
Data splits can be accessed as:
```
from datasets import load_dataset
train_set = load_dataset("lasha-nlp/CONDAQA", split="train")
dev_set = load_dataset("lasha-nlp/CONDAQA", split="dev")
test_set = load_dataset("lasha-nlp/CONDAQA", split="test")
```
## Dataset Creation
Full details are in the paper.
### Curation Rationale
From the paper: "Our goal is to evaluate models on their ability to process the contextual implications of negation. We have the following desiderata for our question-answering dataset:
1. The dataset should include a wide variety of negation cues, not just negative particles.
2. Questions should be targeted towards the _implications_ of a negated statement, rather than the factual content of what was or wasn't negated, to remove common sources of spurious cues in QA datasets (Kaushik and Lipton, 2018; Naik et al., 2018; McCoy et al., 2019).
3. Questions should come in closely-related, contrastive groups, to further reduce the possibility of models' reliance on spurious cues in the data (Gardner et al., 2020). This will result in sets of passages that are similar to each other in terms of the words that they contain, but that may admit different answers to questions.
4. Questions should probe the extent to which models are sensitive to how the negation is expressed. In order to do this, there should be contrasting passages that differ only in their negation cue or its scope."
### Source Data
From the paper: "To construct CondaQA, we first collected passages from a July 2021 version of English Wikipedia that contained negation cues, including single- and multi-word negation phrases, as well as affixal negation."
"We use negation cues from [Morante et al. (2011)](https://aclanthology.org/L12-1077/) and [van Son et al. (2016)](https://aclanthology.org/W16-5007/) as a starting point which we extend."
#### Initial Data Collection and Normalization
We show ten passages to crowdworkers and allow them to choose a passage they would like to work on.
#### Who are the source language producers?
Original passages come from volunteers who contribute to Wikipedia. Passage edits, questions, and answers are produced by crowdworkers.
### Annotations
#### Annotation process
From the paper: "In the first stage of the task, crowdworkers made three types of modifications to the original passage: (1) they paraphrased the negated statement, (2) they modified the scope of the negated statement (while retaining the negation cue), and (3) they undid the negation. In the second stage, we instruct crowdworkers to ask challenging questions about the implications of the negated statement. The crowdworkers then answered the questions they wrote previously for the original and edited passages."
Full details are in the paper.
#### Who are the annotators?
From the paper: "Candidates took a qualification exam which consisted of 12 multiple-choice questions that evaluated comprehension of the instructions. We recruit crowdworkers who answer >70% of the questions correctly for the next stage of the dataset construction task." We use the CrowdAQ platform for the exam and Amazon Mechanical Turk for annotations.
### Personal and Sensitive Information
We expect that such information has already been redacted from Wikipedia.
## Considerations for Using the Data
### Social Impact of Dataset
A model that solves this dataset might be (mis-)represented as an evidence that the model understands the entirety of English language and consequently deployed where it will have immediate and/or downstream impact on stakeholders.
### Discussion of Biases
We are not aware of societal biases that are exhibited in this dataset.
### Other Known Limitations
From the paper: "Though CondaQA currently represents the largest NLU dataset that evaluates a model’s ability to process the implications of negation statements, it is possible to construct a larger dataset, with more examples spanning different answer types. Further CONDAQA is an English dataset, and it would be useful to extend our data collection procedures to build high-quality resources in other languages. Finally, while we attempt to extensively measure and control for artifacts in our dataset, it is possible that our dataset has hidden artifacts that we did not study."
## Additional Information
### Dataset Curators
From the paper: "In order to estimate human performance, and to construct a high-quality evaluation with fewer ambiguous examples, we have five verifiers provide answers for each question in the development and test sets." The first author has been manually checking the annotations throughout the entire data collection process that took ~7 months.
### Licensing Information
license: apache-2.0
### Citation Information
```
@inproceedings{ravichander-et-al-2022-condaqa,
title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
author={Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
proceedings={EMNLP 2022},
year={2022}
}
``` |
jorgeortizfuentes/spanish_books | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- es
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: SpanishBooks
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 40822979419
num_examples: 87967
download_size: 25042031556
dataset_size: 40822979419
---
# Spanish Books
## Dataset Description
- **Total of books:** 87,967
### Dataset Summary
Dataset of books in Spanish crawled from the web and from torrents.
### Preprocessing
Preprocessing performed by [spanish_nlp](https://github.com/jorgeortizfuentes/spanish_nlp).
### Licensing Information
The dataset is available under the [Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
Some books may be subject to copyright. Use for academic purposes only.
### Citation Information
```
@misc{ortiz2022esbooks,
title={Crawled Spanish Books},
author={Jorge Ortiz-Fuentes},
year={2022},
publisher= {Hugging Face}
}
```
|
ruanchaves/b2w-reviews01 | ---
annotations_creators:
- found
language:
- pt
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: B2W-Reviews01
size_categories:
- 100M<n<1B
source_datasets:
- original
tags:
- reviews
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-scoring
- intent-classification
- topic-classification
---
# Dataset Card for Dataset Name
## Dataset Description
- **Repository:** https://github.com/americanas-tech/b2w-reviews01
- **Paper:** http://comissoes.sbc.org.br/ce-pln/stil2019/proceedings-stil-2019-Final-Publicacao.pdf
- **Point of Contact:** Livy Real
### Dataset Summary
B2W-Reviews01 is an open corpus of product reviews. It contains more than 130k e-commerce customer reviews, collected from the Americanas.com website between January and May 2018. B2W-Reviews01 offers rich information about the reviewer profile, such as gender, age, and geographical location. The corpus also has two different review ratings:
* the usual 5-point scale rate, represented by stars in most e-commerce websites,
* a "recommend to a friend" label, a "yes or no" question representing the willingness of the customer to recommend the product to someone else.
### Supported Tasks and Leaderboards
* Sentiment Analysis
* Topic Modeling
### Languages
* Portuguese
## Dataset Structure
### Data Instances
```
{'submission_date': '2018-01-02 06:23:22',
'reviewer_id': '6adc7901926fc1697d34181fbd88895976b4f3f31f0102d90217d248a1fad156',
'product_id': '123911277',
'product_name': 'Triciclo Gangorra Belfix Cabeça Cachorro Rosa',
'product_brand': 'belfix',
'site_category_lv1': 'Brinquedos',
'site_category_lv2': 'Mini Veículos',
'review_title': 'O produto não foi entregue',
'overall_rating': 1,
'recommend_to_a_friend': 'Yes',
'review_text': 'Incrível o descaso com o consumidor. O produto não chegou, apesar de já ter sido pago. Não recebo qualquer informação sobre onde se encontra o produto, ou qualquer compensação do vendedor. Não recomendo.',
'reviewer_birth_year': 1981,
'reviewer_gender': 'M',
'reviewer_state': 'RJ'}
```
### Data Fields
* **submission_date**: the date and time when the review was submitted. `"%Y-%m-%d %H:%M:%S"`.
* **reviewer_id**: a unique identifier for the reviewer.
* **product_id**: a unique identifier for the product being reviewed.
* **product_name**: the name of the product being reviewed.
* **product_brand**: the brand of the product being reviewed.
* **site_category_lv1**: the highest level category for the product on the site where the review is being submitted.
* **site_category_lv2**: the second level category for the product on the site where the review is being submitted.
* **review_title**: the title of the review.
* **overall_rating**: the overall star rating given by the reviewer on a scale of 1 to 5.
* **recommend_to_a_friend**: whether or not the reviewer would recommend the product to a friend (Yes/No).
* **review_text**: the full text of the review.
* **reviewer_birth_year**: the birth year of the reviewer.
* **reviewer_gender**: the gender of the reviewer (F/M).
* **reviewer_state**: the Brazilian state of the reviewer (e.g. RJ).
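For the sentiment-analysis task, a common preprocessing step is mapping the 5-point `overall_rating` field to discrete sentiment labels. The cut-offs below are a modelling choice, not part of the corpus:

```python
# Map the 1-5 star rating to a coarse sentiment label.
def rating_to_sentiment(overall_rating: int) -> str:
    if overall_rating >= 4:
        return "positive"
    if overall_rating <= 2:
        return "negative"
    return "neutral"

labels = [rating_to_sentiment(r) for r in (1, 3, 5)]
```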
### Data Splits
| name |train|
|---------|----:|
|b2w-reviews01|132373|
### Citation Information
```
@inproceedings{real2019b2w,
title={B2W-reviews01: an open product reviews corpus},
author={Real, Livy and Oshiro, Marcio and Mafra, Alexandre},
booktitle={STIL-Symposium in Information and Human Language Technology},
year={2019}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. |
IlyaGusev/pikabu | ---
dataset_info:
features:
- name: id
dtype: int64
- name: title
dtype: string
- name: text_markdown
dtype: string
- name: timestamp
dtype: uint64
- name: author_id
dtype: int64
- name: username
dtype: string
- name: rating
dtype: int64
- name: pluses
dtype: int64
- name: minuses
dtype: int64
- name: url
dtype: string
- name: tags
sequence: string
- name: blocks
sequence:
- name: data
dtype: string
- name: type
dtype: string
- name: comments
sequence:
- name: id
dtype: int64
- name: timestamp
dtype: uint64
- name: parent_id
dtype: int64
- name: text_markdown
dtype: string
- name: text_html
dtype: string
- name: images
sequence: string
- name: rating
dtype: int64
- name: pluses
dtype: int64
- name: minuses
dtype: int64
- name: author_id
dtype: int64
- name: username
dtype: string
splits:
- name: train
num_bytes: 96105803658
num_examples: 6907622
download_size: 20196853689
dataset_size: 96105803658
task_categories:
- text-generation
language:
- ru
size_categories:
- 1M<n<10M
---
# Pikabu dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Description
**Summary:** Dataset of posts and comments from [pikabu.ru](https://pikabu.ru/), a website that is the Russian equivalent of Reddit/9gag.
**Script:** [convert_pikabu.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/convert_pikabu.py)
**Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu)
**Languages:** Mostly Russian.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/pikabu', split="train", streaming=True)
for example in dataset:
print(example["text_markdown"])
```
## Data Instances
```
{
"id": 69911642,
"title": "Что можно купить в Китае за цену нового iPhone 11 Pro",
"text_markdown": "...",
"timestamp": 1571221527,
"author_id": 2900955,
"username": "chinatoday.ru",
"rating": -4,
"pluses": 9,
"minuses": 13,
"url": "...",
"tags": ["Китай", "AliExpress", "Бизнес"],
"blocks": {"data": ["...", "..."], "type": ["text", "text"]},
"comments": {
"id": [152116588, 152116426],
"text_markdown": ["...", "..."],
"text_html": ["...", "..."],
"images": [[], []],
"rating": [2, 0],
"pluses": [2, 0],
"minuses": [0, 0],
"author_id": [2104711, 2900955],
"username": ["FlyZombieFly", "chinatoday.ru"]
}
}
```
You can use this little helper to unflatten sequences:
```python
def revert_flattening(records):
fixed_records = []
for key, values in records.items():
if not fixed_records:
fixed_records = [{} for _ in range(len(values))]
for i, value in enumerate(values):
fixed_records[i][key] = value
return fixed_records
```
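For example, applied to the flattened `comments` field from the instance above (the helper is repeated here so the snippet runs on its own):

```python
# Same helper as above, repeated for a self-contained example.
def revert_flattening(records):
    fixed_records = []
    for key, values in records.items():
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records

# Flattened comments, as stored in the dataset.
comments = {
    "id": [152116588, 152116426],
    "rating": [2, 0],
    "username": ["FlyZombieFly", "chinatoday.ru"],
}
fixed = revert_flattening(comments)  # -> list of one dict per comment
```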
## Source Data
* The data source is the [Pikabu](https://pikabu.ru/) website.
* An original dump can be found here: [pikastat](https://pikastat.d3d.info/)
* Processing script is [here](https://github.com/IlyaGusev/rulm/blob/master/data_processing/convert_pikabu.py).
## Personal and Sensitive Information
The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.
|
andersonbcdefg/supernatural-instructions-2m | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1859403487.079275
num_examples: 1990915
download_size: 521457643
dataset_size: 1859403487.079275
---
# Dataset Card for "supernatural-instructions-2m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PaulAdversarial/all_news_finance_sm_1h2023 | ---
license: afl-3.0
---
|
commaai/commavq | ---
license: mit
size_categories:
- 100K<n<1M
---
# commaVQ
commaVQ is a dataset of 100,000 heavily compressed driving videos for Machine Learning research. Heavily compressed driving videos like these are useful for experimenting with GPT-like video prediction models. This repo includes an encoder/decoder and an example of a video prediction model.
Examples and trained models can be found here: https://github.com/commaai/commavq
# Overview
A VQ-VAE [1,2] was used to heavily compress each frame into 128 "tokens" of 10 bits each. Each entry of the dataset is a "segment" of compressed driving video, i.e. 1min of frames at 20 FPS. Each file is of shape 1200x8x16 and saved as int16.
Note that the compressor is extremely lossy on purpose. It makes the dataset smaller and easy to play with (train GPT with large context size, fast autoregressive generation, etc.). We might extend the dataset to a less lossy version when we see fit.
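For a sense of scale, a single segment is a 1200x8x16 int16 array, i.e. 1200 frames of 8x16 = 128 tokens each (dummy data below, not real tokens from the dataset):

```python
import numpy as np

# One segment: 1 minute at 20 FPS = 1200 frames, each frame is an 8x16 token grid
rng = np.random.default_rng(0)
segment = rng.integers(0, 1024, size=(1200, 8, 16), dtype=np.int16)  # 10-bit tokens

frames, h, w = segment.shape
print(frames, h * w)                 # 1200 128
print(segment.nbytes / 1024, "KiB")  # 2 bytes per token -> 300.0 KiB per segment
```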
<video title="source" controls>
<source src="https://github.com/commaai/commavq/assets/29985433/91894bf7-592b-4204-b3f2-3e805984045c" type="video/mp4">
</video>
<video title="compressed" controls>
<source src="https://github.com/commaai/commavq/assets/29985433/3a799ac8-781e-461c-bf14-c15cea42b985" type="video/mp4">
</video>
<video title="imagined" controls>
<source src="https://github.com/commaai/commavq/assets/29985433/f6f7699b-b6cb-4f9c-80c9-8e00d75fbfae" type="video/mp4">
</video>
# References
[1] Van Den Oord, Aaron, and Oriol Vinyals. "Neural discrete representation learning." Advances in neural information processing systems 30 (2017).
[2] Esser, Patrick, Robin Rombach, and Bjorn Ommer. "Taming transformers for high-resolution image synthesis." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021. |
FreedomIntelligence/CMB | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- zh
tags:
- medical
- biology
- chemistry
size_categories:
- 100K<n<1M
configs:
- config_name: CMB-Clin
data_files:
- split: test
path: CMB-Clin/CMB-Clin-qa.json
- config_name: CMB-Exam
data_files:
- split: test
path: CMB-Exam/CMB-test/CMB-test-choice-question-merge.json
- split: train
path: CMB-Exam/CMB-train/CMB-train-merge.json
- split: val
path: CMB-Exam/CMB-val/CMB-val-merge.json
---
# CMB: A Comprehensive Medical Benchmark in Chinese
![CMB](assets/title.png)
<p align="center">
🌐 <a href="https://github.com/FreedomIntelligence/CMB" target="_blank">Github</a> • 🌐 <a href="https://cmedbenchmark.llmzoo.com/#home" target="_blank">Website</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/CMB" target="_blank">HuggingFace</a>
</p>
## 🌈 Update
* **[2024.02.21]** The [answers](https://github.com/FreedomIntelligence/CMB/tree/main/data) to the CMB-Exam test set have been updated, and some errors caused by omissions in version management have been fixed.
* **[2024.01.08]** To facilitate testing, we have disclosed the [answers](https://github.com/FreedomIntelligence/CMB/tree/main/data) to the CMB-Exam test set.
* **[2023.09.22]** CMB is included in [OpenCompass](https://github.com/open-compass/opencompass).
* **[2023.08.21]** [Paper](https://arxiv.org/abs/2308.08833) released.
* **[2023.08.01]** 🎉🎉🎉 CMB is published!🎉🎉🎉
## 🌐 Download Data
- (Recommended) Download the [zip file](https://github.com/FreedomIntelligence/CMB/tree/main/data) and unzip:
```bash
git clone "https://github.com/FreedomIntelligence/CMB.git" && cd CMB && unzip "./data/CMB.zip" -d "./data/" && rm "./data/CMB.zip"
```
- Or load our data as follows:
```python
from datasets import load_dataset
# CMB-Exam datasets (multiple-choice and multiple-answer questions)
exam_datasets = load_dataset('FreedomIntelligence/CMB','exam')
# CMB-Clin datasets
clin_datasets = load_dataset('FreedomIntelligence/CMB','clin')
```
## 🥇 Leaderboard
Please Check [Leaderboard](https://cmedbenchmark.llmzoo.com/static/leaderboard.html).
## 🥸 Dataset intro
![CMB](assets/CMB-2.svg)
### Components
- CMB-Exam: Comprehensive multi-level assessment for medical knowledge
- Structure: 6 major categories and 28 subcategories, [View Catalog](catalog.md)
- CMB-test: 400 questions per subcategory, 11,200 questions in total
- CMB-val: 280 questions with solutions and explanations; used as the source for CoT and few-shot examples
- CMB-train: 269,359 questions for medical knowledge injection
- CMB-Clin: 74 cases of complex medical inquiries
### CMB-Exam Item
```json
{
"exam_type": "医师考试",
"exam_class": "执业医师",
"exam_subject": "口腔执业医师",
"question": "患者,男性,11岁。近2个月来时有低热(37~38℃),全身无明显症状。查体无明显阳性体征。X线检查发现右肺中部有一直径约0.8cm类圆形病灶,边缘稍模糊,肺门淋巴结肿大。此男孩可能患",
"answer": "D",
"question_type": "单项选择题",
"option": {
"A": "小叶型肺炎",
"B": "浸润性肺结核",
"C": "继发性肺结核",
"D": "原发性肺结核",
"E": "粟粒型肺结核"
}
},
```
- exam_type: major category
- exam_class: sub-category
- exam_subject: Specific departments or subdivisions of disciplines
- question_type: *multiple-choice (单项选择题)* or *multiple-answer (多项选择题)*
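A minimal sketch of scoring a model prediction against such an item (the comparison convention is an assumption: multiple-answer questions are compared as sets of option letters):

```python
def is_correct(item: dict, prediction: str) -> bool:
    """Compare a predicted option string (e.g. "D" or "ABD") with the gold answer."""
    gold = set(item["answer"])
    pred = set(prediction.strip().upper())
    if item["question_type"] == "单项选择题":  # multiple-choice: exactly one letter
        return len(pred) == 1 and pred == gold
    return pred == gold  # multiple-answer: all letters must match

item = {"answer": "D", "question_type": "单项选择题"}
print(is_correct(item, "D"))   # True
print(is_correct(item, "AD"))  # False
```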
### CMB-Clin Item
```json
{
"id": 0,
"title": "案例分析-腹外疝",
"description": "现病史\n(1)病史摘要\n 病人,男,49岁,3小时前解大便后出现右下腹疼痛,右下腹可触及一包块,既往体健。\n(2)主诉\n 右下腹痛并自扪及包块3小时。\n\n体格检查\n体温: T 37.8℃,P 101次/分,呼吸22次/分,BP 100/60mmHg,腹软,未见胃肠型蠕动波,肝脾肋下未及,于右侧腹股沟区可扪及一圆形肿块,约4cm×4cm大小,有压痛、界欠清,且肿块位于腹股沟韧带上内方。\n\n辅助检查\n(1)实验室检查\n 血常规:WBC 5.0×109/L,N 78%。\n 尿常规正常。\n(2)多普勒超声检查\n 沿腹股沟纵切可见一多层分布的混合回声区,宽窄不等,远端膨大,边界整齐,长约4~5cm。\n(3)腹部X线检查\n 可见阶梯状液气平。",
"QA_pairs": [
{
"question": "简述该病人的诊断及诊断依据。",
"solution": "诊断:嵌顿性腹股沟斜疝合并肠梗阻。\n诊断依据:\n①右下腹痛并自扪及包块3小时;\n②有腹胀、呕吐,类似肠梗阻表现;腹部平片可见阶梯状液平,考虑肠梗阻可能;腹部B超考虑,\n腹部包块内可能为肠管可能;\n③有轻度毒性反应或是中毒反应,如 T 37.8℃,P 101次/分,白细胞中性分类78%;\n④腹股沟区包块位于腹股沟韧带上内方。"
},
{
"question": "简述该病人的鉴别诊断。",
"solution": "(1)睾丸鞘膜积液:鞘膜积液所呈现的肿块完全局限在阴囊内,其上界可以清楚地摸到;用透光试验检查肿块,鞘膜积液多为透光(阳性),而疝块则不能透光。\n(2)交通性鞘膜积液:肿块的外形与睾丸鞘膜积液相似。于每日起床后或站立活动时肿块缓慢地出现并增大。平卧或睡觉后肿块逐渐缩小,挤压肿块,其体积也可逐渐缩小。透光试验为阳性。\n(3)精索鞘膜积液:肿块较小,在腹股沟管内,牵拉同侧睾丸可见肿块移动。\n(4)隐睾:腹股沟管内下降不全的睾丸可被误诊为斜疝或精索鞘膜积液。隐睾肿块较小,挤压时可出现特有的胀痛感觉。如患侧阴囊内睾丸缺如,则诊断更为明确。\n(5)急性肠梗阻:肠管被嵌顿的疝可伴发急性肠梗阻,但不应仅满足于肠梗阻的诊断而忽略疝的存在;尤其是病人比较肥胖或疝块较小时,更易发生这类问题而导致治疗上的错误。\n(6)此外,腹股沟区肿块还应与以下疾病鉴别:肿大的淋巴结、动(静)脉瘤、软组织肿瘤、脓肿、\n圆韧带囊肿、子宫内膜异位症等。"
},
{
"question": "简述该病人的治疗原则。",
"solution": "嵌顿性疝原则上需要紧急手术治疗,以防止疝内容物坏死并解除伴发的肠梗阻。术前应做好必要的准备,如有脱水和电解质紊乱,应迅速补液加以纠正。手术的关键在于正确判断疝内容物的活力,然后根据病情确定处理方法。在扩张或切开疝环、解除疝环压迫的前提下,凡肠管呈紫黑色,失去光泽和弹性,刺激后无蠕动和相应肠系膜内无动脉搏动者,即可判定为肠坏死。如肠管尚未坏死,则可将其送回腹腔,按一般易复性疝处理,即行疝囊高位结扎+疝修补术。如肠管确已坏死或一时不能肯定肠管是否已失去活力时,则应在病人全身情况允许的前提下,切除该段肠管并进行一期吻合。凡施行肠切除吻合术的病人,因手术区污染,在高位结扎疝囊后,一般不宜作疝修补术,以免因感染而致修补失败。"
}
]
},
```
- title: name of disease
- description: information of patient
- QA_pairs: a series of questions and their solutions based on the description
## ℹ️ How to Evaluate and Submit
Please refer to this [link](https://github.com/FreedomIntelligence/CMB).
## 😘 Citation
Please use the following citation if you intend to use our dataset for training or evaluation:
```
@misc{cmedbenchmark,
title={CMB: Chinese Medical Benchmark},
author={Xidong Wang*, Guiming Hardy Chen*, Dingjie Song*, Zhiyi Zhang*, Qingying Xiao, Xiangbo Wu, Feng Jiang, Jianquan Li, Benyou Wang},
note={Xidong Wang, Guiming Hardy Chen, Dingjie Song, and Zhiyi Zhang contributed equally to this github repo.},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/FreedomIntelligence/CMB}},
}
```
## Acknowledgement
- We thank [Shenzhen Research Institute of Big Data](http://www.sribd.cn/) for their enormous support for this project.
- We thank the following doctors for participating in the human evaluation of CMB-Clin:
- 林士军 (香港中文大学(深圳)附属第二医院)
- 常河
- 许晓爽
|
lamini/open_llms | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 914763.8050314465
num_examples: 1001
- name: test
num_bytes: 102351.19496855346
num_examples: 112
download_size: 184863
dataset_size: 1017115.0
---
# Dataset Card for "open_llms"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
morpheuslord/cve-llm-training | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: CVE-LLm-Dataset
data_source: Custom data collected from the CVE database
data_formats: JSONL
---
# CVE-llm_dataset
This dataset is intended to train an LLM model for entirely CVE-focused input and output.
## Data extraction:
For the data extraction, I first downloaded the CVE database from the NVD lists and then loaded it using `cve_dataset_2.py` and `cve_dataset.py`. The two scripts produce different datasets: one is for LLaMA and the other is for OpenAI GPT.
The CVE json files are mapped in this format:
```
cves:
|
├─1999
| ├─0xxx
| | ├─CVE-1999-0001.json
| | ├─....
| | └─CVE-1999-0999.json
| └─1xxx
| ├─CVE-1999-1000.json
| ├─....
| └─CVE-1999-1598.json
└─2023
```
The programs traverse through these folders extract the data in the files and arrange them into usable formats for the fine-tuning process.
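A sketch of how such a tree can be traversed to collect the JSON records (file layout as in the diagram above; the real scripts do additional field formatting):

```python
import json
import os
import tempfile

def collect_cves(root: str):
    """Walk the cves/<year>/<bucket>/CVE-*.json tree and yield (cve_id, record)."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.startswith("CVE-") and name.endswith(".json"):
                with open(os.path.join(dirpath, name), encoding="utf-8") as f:
                    yield name[:-5], json.load(f)  # strip ".json" from the file name

# tiny demo tree mirroring the layout above
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "1999", "0xxx"))
with open(os.path.join(root, "1999", "0xxx", "CVE-1999-0001.json"), "w") as f:
    json.dump({"state": "PUBLISHED"}, f)

ids = [cve_id for cve_id, _record in collect_cves(root)]
print(ids)  # ['CVE-1999-0001']
```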
## llama2 Model dataset:
The llama2 fine-tuned dataset follows this format:
```
{
"instruction": "Explain CVE-1999-0001",
"input": "Explain the vulnerability: CVE-1999-0001",
"output": "ip_input.c in BSD-derived TCP/IP implementations allows remote attackers to cause a denial of service (crash or hang) via crafted packets.\nAffected Products: n/a\nReferences: [{'tags': ['x_refsource_CONFIRM'], 'url': 'http://www.openbsd.org/errata23.html#tcpfix'}, {'name': '5707', 'tags': ['vdb-entry', 'x_refsource_OSVDB'], 'url': 'http://www.osvdb.org/5707'}]\nCVE State: PUBLISHED"
}
```
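A sketch of how one such record could be rendered into a single training string (the Alpaca-style template below is an assumption for illustration, not necessarily the exact format used):

```python
# hypothetical prompt template; adjust to your fine-tuning framework
TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

record = {
    "instruction": "Explain CVE-1999-0001",
    "input": "Explain the vulnerability: CVE-1999-0001",
    "output": "ip_input.c in BSD-derived TCP/IP implementations ...",
}
text = TEMPLATE.format(**record)
print(text.splitlines()[0])  # ### Instruction:
```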
The instruction is what we instruct the AI to do with the data provided. For example, we can command the AI to `take in user input, analyze it, and then return an answer based on what the user asks`. This is also where we can add a `role` or a `persona` to the AI.
The input is the user's main query or the data that must be processed by the AI. This is a crucial piece of information that the AI processes in order to provide an output.
The output is the format that we define, telling the AI to generate answers in that format, or the answer itself to the question asked. |
benmainbird/prompt_answers_v1 | ---
language:
- en
pretty_name: "Open Prompt LLM Answers"
tags:
- llm
- prompts
- answers
---
# Dataset Card for Open Prompt Answers
## Dataset Summary
This dataset provides answers from different Large Language Models to prompts from several public datasets.
+ `prompt`: a prompt from an open-source dataset
+ `prompt_origin`: the dataset the prompt is taken from
+ `Llama-2-7b-chat-hf_output`: output generation of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) model
+ `Llama-2-7b-chat-hf_generation_time`: generation duration *in seconds* for the answer of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) model
+ `oasst-sft-4-pythia-12b_output`: output generation of [OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5](https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5) model
+ `oasst-sft-4-pythia-12b_generation_time`: generation duration *in seconds* for the answer of [OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5](https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5) model
+ `vicuna-7b-v1.5_output`: output generation of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) model
+ `vicuna-7b-v1.5_generation_time`: generation duration *in seconds* for the answer of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) model
## Prompt Sources
The prompts are a subset of all prompts of the following datasets:
+ [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1): only english prompts with no previous conversation tree (`role = prompter` and `parent_id = null`)
+ [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf): only the initial input of the *Human* as prompt
+ [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca): concatenated `instruction` and `input` to form prompt
+ [Dahoas/synthetic-instruct-gptj-pairwise](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise): prompts from `prompt` column
## Output Generation
The configuration is the same for each model:
+ `temperature`: 0.7
+ `max_new_tokens`: 512
+ `repetition_penalty`: 1.0
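A sketch of how the per-answer timing could have been collected (the model call is stubbed out; the real pipeline and function names are assumptions):

```python
import time

def generate(prompt: str) -> str:
    # stand-in for a real model call with temperature=0.7, max_new_tokens=512
    return "stub answer for: " + prompt

def timed_generate(prompt: str):
    start = time.monotonic()
    output = generate(prompt)
    return output, round(time.monotonic() - start, 3)  # duration in seconds

answer, seconds = timed_generate("What is RLHF?")
print(seconds >= 0.0)  # True
```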
The generation duration is provided (in seconds). |
b3x0m/Chinese-H-Novels | ---
task_categories:
- text-classification
- summarization
- token-classification
- text2text-generation
language:
- zh
tags:
- not-for-all-audiences
- art
pretty_name: H-novel-corpus
size_categories:
- 10B<n<100B
---
Chinese 18+ novels corpus, use at your own risk, you and only you are responsible for every choice you make.
tags: socks, garter belt, foot fetish, ntr, netori..... |
meta-math/GSM8K_zh | ---
license: mit
task_categories:
- question-answering
language:
- en
- zh
tags:
- math
- math-qa
- chinese-math-qa
size_categories:
- n<1K
---
# Dataset
`GSM8K_zh` is a dataset for mathematical reasoning in Chinese; question-answer pairs are translated from GSM8K (https://github.com/openai/grade-school-math/tree/master) by `GPT-3.5-Turbo` with few-shot prompting.
The dataset consists of 7473 training samples and 1319 testing samples. The former is for **supervised fine-tuning**, while the latter is for **evaluation**.
For training samples, `question_zh` and `answer_zh` are the question and answer keys, respectively.
For testing samples, only the translated questions are provided (`question_zh`).
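A minimal sketch of turning a training sample into a supervised fine-tuning pair (field names as described above; the prompt/completion layout is an assumption):

```python
def to_sft_pair(sample: dict) -> dict:
    """Map a GSM8K_zh training sample to a prompt/completion pair."""
    return {
        "prompt": sample["question_zh"].strip(),
        "completion": sample["answer_zh"].strip(),
    }

sample = {
    "question_zh": "小明有3个苹果,又买了2个,现在有几个?",
    "answer_zh": "3 + 2 = 5,所以有5个苹果。",
}
pair = to_sft_pair(sample)
print(sorted(pair))  # ['completion', 'prompt']
```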
# Citation
If you find the `GSM8K_zh` dataset useful for your projects/papers, please cite the following paper.
```bibtex
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
``` |
ajibawa-2023/Code-74k-ShareGPT | ---
license: cc-by-nc-nd-4.0
task_categories:
- conversational
language:
- en
tags:
- code
size_categories:
- 10K<n<100K
---
**Code-74k-ShareGPT**
This dataset is in Vicuna/ShareGPT format. There are around 74,000 sets of conversations, each set having 2 conversations.
Code in Python, Java, JavaScript, Go, C++, Rust, etc., with detailed explanations, is provided. It is built upon my existing dataset [Python-Code-23k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT).
Additional data was generated using GPT-3.5, GPT-4, etc. |
zefang-liu/secqa | ---
license: cc-by-nc-sa-4.0
task_categories:
- multiple-choice
language:
- en
tags:
- security
size_categories:
- n<1K
configs:
- config_name: secqa_v1
data_files:
- split: dev
path: "data/secqa_v1_dev.csv"
- split: val
path: "data/secqa_v1_val.csv"
- split: test
path: "data/secqa_v1_test.csv"
- config_name: secqa_v2
data_files:
- split: dev
path: "data/secqa_v2_dev.csv"
- split: val
path: "data/secqa_v2_val.csv"
- split: test
path: "data/secqa_v2_test.csv"
---
# SecQA
<!-- Provide a quick summary of the dataset. -->
SecQA is a specialized dataset created for the evaluation of Large Language Models (LLMs) in the domain of computer security.
It consists of multiple-choice questions, generated using GPT-4 and the
[Computer Systems Security: Planning for Success](https://web.njit.edu/~rt494/security/) textbook,
aimed at assessing the understanding and application of LLMs' knowledge in computer security.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
SecQA is an innovative dataset designed to benchmark the performance of Large Language Models (LLMs) in the field of computer security.
It contains a series of multiple-choice questions generated by GPT-4, based on the content from the textbook
[Computer Systems Security: Planning for Success](https://web.njit.edu/~rt494/security/).
The dataset is structured into two versions, v1 and v2, with v2 presenting a higher level of difficulty.
This design allows for a preliminary evaluation of LLMs across different levels of complexity
in understanding and applying computer security principles.
The dataset aims to provide a unique resource for researchers and developers to gauge the capabilities of LLMs
in this domain that is critical to modern digital infrastructures.
- **Curated by:** [Zefang Liu](https://www.linkedin.com/in/zefang-liu/)
- **Language(s) (NLP):** English
- **License:** [CC BY-NC-SA 4.0 DEED](https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** [SecQA](https://huggingface.co/datasets/zefang-liu/secqa)
- **Book:** [Computer Systems Security: Planning for Success](https://web.njit.edu/~rt494/security/)
- **Paper:** [SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security](https://arxiv.org/abs/2312.15838)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
The primary application of SecQA is to serve as a benchmark for testing and evaluating
the capabilities of LLMs in the domain of computer security.
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
The SecQA dataset is primarily intended for evaluating and benchmarking the performance of Large Language Models (LLMs)
in understanding and applying principles of computer security.
It's suitable for academic research, development of AI in cybersecurity education,
and testing the ability of models to interpret and respond to security-related scenarios.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
SecQA is not designed for and should not be used as a sole resource for real-world cybersecurity decision-making or incident response.
Its use is also inappropriate for training models for unethical purposes, such as hacking or creating security exploits.
Additionally, the dataset should not be considered comprehensive for all aspects of computer security,
and thus, it's not suitable for scenarios requiring broad or up-to-date industry knowledge.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
SecQA is structured into two versions, v1 and v2. Version 1 (v1) serves as the foundational level,
while version 2 (v2) presents a more advanced challenge, catering to a higher degree of difficulty in the questions posed.
Each version is composed of multiple-choice questions that are closely aligned with different learning objectives
within the field of computer security.
Each question in the dataset offers four answer choices, with only one being the correct answer.
To ensure fairness and eliminate any bias in question design, the answer choices have been carefully shuffled.
This shuffling not only contributes to a balanced distribution of answers
but also enhances the dataset’s effectiveness in evaluating the nuanced understanding and reasoning capabilities
of Large Language Models in computer security scenarios.
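The shuffling described above can be sketched as follows (a minimal illustration, not the actual generation code):

```python
import random

def shuffle_choices(choices, correct_index, seed=0):
    """Shuffle four answer options while tracking which letter is now correct."""
    rng = random.Random(seed)
    order = list(range(len(choices)))
    rng.shuffle(order)
    shuffled = [choices[i] for i in order]
    new_letter = "ABCD"[order.index(correct_index)]
    return shuffled, new_letter

choices = ["Phishing", "Encryption", "Firewall", "Hashing"]
shuffled, answer = shuffle_choices(choices, correct_index=1)
print(answer in "ABCD")  # True
```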
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The dataset was created to fill a gap in assessing the understanding and application of computer security concepts by LLMs.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The questions were generated by GPT-4, leveraging content from the textbook "Computer Systems Security: Planning for Success"
under the guidance of researchers.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
The source data is produced by a collaboration between GPT-4 and researchers, utilizing the aforementioned textbook.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The SecQA dataset, though valuable for evaluating LLMs in computer security,
has limitations due to potential content biases from its source material and GPT-4 processing,
a narrow focus on computer security that may not extend to broader cybersecurity contexts,
and varying levels of difficulty across versions that could affect model assessment fairness.
Additionally, the shuffling of answer choices, while promoting balance, might introduce patterns exploitable by sophisticated models.
Given the rapid evolution of the field, some aspects of the dataset may quickly become outdated,
and there is a risk of misuse for purposes like security manipulation.
These factors should be carefully considered in research and application contexts.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{liu2023secqa,
title={SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security},
author={Zefang Liu},
year={2023},
eprint={2312.15838},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**APA:**
Zefang Liu. (2023). SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security.
## Dataset Card Contact
For inquiries or further information about the SecQA dataset,
please contact [Zefang Liu](https://www.linkedin.com/in/zefang-liu/). |
unalignment/comedy-snippets-v0.1 | ---
license: cc-by-nc-4.0
---
A very small sampling of snippets of comedy routines by George Carlin and Tom Segura. |
paraloq/json_data_extraction | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- json
- data-extraction
- structured-generation
- restricted-generation
- ecommerce
- medical
- manufacturing
- server logs
- news
pretty_name: Diverse Restricted JSON Data Extraction
size_categories:
- 1K<n<10K
---
# Diverse Restricted JSON Data Extraction
- **Curated by:** The [paraloq analytics](https://www.paraloq.ai) team.
## Uses
1. **Benchmark** restricted JSON data extraction (text + JSON schema -> JSON instance)
2. **Fine-Tune** data extraction model (text + JSON schema -> JSON instance)
3. **Fine-Tune** JSON schema Retrieval model (text -> retriever -> most adequate JSON schema)
### Out-of-Scope Use
Intended for research purposes only.
## Dataset Structure
The data comes with the following fields:
- **title**: The title of the schema.
- **topic**: The general topic of the item. For a list of topics, see below.
- **schema**: The JSON schema specifying the structure of the data.
- **item**: A JSON instance of the schema holding actual data.
- **medium**: The medium of the example data. Examples include "news article", "blog post", "email", "html web page", "conversation", etc.
- **text**: An instance of the given medium, containing all the information held by the item, along with additional information.
A focus of this dataset is to provide a diverse set of items from a wide array of topics. We currently include the following topic areas:
- **simple**: Simple, general, documents such as to-do lists, calendars, recipes, etc. This is the most generic topic and is designed to be easy to extract.
- **medical**: Medical documents such as patient records, prescriptions, test results, etc.
- **ecommerce**: Ecommerce documents such as product listings, shopping carts, order confirmations, etc.
- **business**: Business documents such as invoices, purchase orders, contracts, etc.
- **travel**: Travel documents such as flight bookings, hotel reservations, itineraries, etc.
- **media**: Media documents such as movie reviews, music albums, video games, etc.
- **technology**: Technology documents such as software licenses, API responses, error logs, etc.
- **manufacturing**: Manufacturing documents such as product BOMs, work orders, inspection reports, COAs etc.
## Dataset Creation
### Curation Rationale
We use this dataset to benchmark different models for their ability to extract data from unstructured text in a zero-shot fashion, by including the desired JSON schema in the prompt.
The dataset can also be used to fine-tune a model to extract data in a zero-shot manner, by feeding it text and a target JSON schema. Note that the difficulty here is typically not that the model output does not adhere to the desired JSON schema. This can be enforced by restricting generation using [guidance](https://github.com/guidance-ai/guidance) or [outlines](https://github.com/outlines-dev/outlines). For us, the issue is often that a model would not extract all of the available data.
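For the benchmark use case, a quick completeness check of an extracted item against its schema can be sketched with the stdlib (a real evaluation would use a full validator such as `jsonschema`):

```python
def missing_required(schema: dict, item: dict) -> list:
    """Return required top-level fields from the schema that are absent in the item."""
    required = schema.get("required", [])
    return [key for key in required if key not in item]

# hypothetical ecommerce schema/item for illustration
schema = {
    "type": "object",
    "required": ["name", "price"],
    "properties": {"name": {"type": "string"}, "price": {"type": "number"}},
}
print(missing_required(schema, {"name": "USB cable"}))  # ['price']
```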
### Source Data
This data is synthetically generated using Google's Gemini-Pro.
#### Data Collection and Processing
1. Prompt the model to generate a list of JSON schemas representing a diverse set of items.
2. Prompt the model to create instances from each of the schemas.
3. Prompt the model to generate text (in the form of a blog post, server logs, emails, chats, etc.) that contains the information held by the instance.
#### Who are the source data producers?
paraloq analytics is an Austrian AI research and development company based in Vienna.
## Bias, Risks, and Limitations
The data might include biases resulting from the sampling and bias propagation from Google's Gemini-Pro.
## Dataset Card Authors
Max Arrich
|
fblgit/simple-math-DPO | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: prompt
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 313485868.75
num_examples: 760000
- name: test
num_bytes: 16499256.25
num_examples: 40000
download_size: 101158122
dataset_size: 329985125.0
license: cc-by-nc-nd-4.0
task_categories:
- conversational
- reinforcement-learning
tags:
- math
- simple-math
pretty_name: Simple Math (DPO)
size_categories:
- 100K<n<1M
---
# Simple Math: 2+2=4 -1=3 (LoLo: Learning Only Logical Operations) DPO Pairs
Just like my teacher gave me homework, I thought maybe we can also add some of these basics to the training of our models.
It was created with very simple code that is in the repo; if you add more complex operations and so on, **please share the code** :D thank you
Current Code Version: 20240127.fblgit (A modification over @win10 for progressive and DPO operation)
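In that spirit, a sketch of how such DPO pairs can be generated (the off-by-one rejected answer is an assumption for illustration; the real generator lives in the repo):

```python
import random

def make_pair(rng: random.Random) -> dict:
    """Build one DPO pair for a simple addition, with a wrong 'rejected' answer."""
    a, b = rng.randint(0, 99), rng.randint(0, 99)
    good = a + b
    bad = good + rng.choice([-1, 1])  # plausible-but-wrong distractor
    return {
        "prompt": f"{a} + {b}",
        "chosen": [{"role": "assistant", "content": str(good)}],
        "rejected": [{"role": "assistant", "content": str(bad)}],
    }

pair = make_pair(random.Random(42))
print(pair["prompt"], "->", pair["chosen"][0]["content"])
```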
![LoLo: Learning Only Logical Operations](https://huggingface.co/datasets/fblgit/simple-math/resolve/main/LOLO.png)
## Versions
```
27.01.24 First DPO Generator
```
## Citations
If you use Simple Math to train your model, please cite it on the model card or in the paper.
```
@misc{simplemath,
title={Simple-Math: 2+2=4 4-1=3},
author={Xavier Murias},
year={2024},
publisher = {Juanako.AI},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/fblgit/simple-math}},
}
``` |
sc890/DEEPFRUlT_DATASET | ---
language:
- en
license: apache-2.0
size_categories:
- 100M<n<1B
task_categories:
- feature-extraction
- text-classification
tags:
- biomedical
- imaging
- computer vision
- tuberculosis
- multimodal
dataset_info:
features:
- name: image_name
dtype: string
- name: image_id
dtype: string
- name: number
dtype: string
- name: image_path
dtype: string
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 1229202
num_examples: 10689
- name: test
num_bytes: 306617
num_examples: 2694
download_size: 42809832
dataset_size: 70088819.588
configs:
- config_name: default
data_files:
- split: train
path: data/train-data-*
- split: test
path: data/test-data-*
---
# DeepFruit Dataset
<!--The dataset is from Mendeley, comprises 21,122 images of 20 diverse fruit types across 8 different combinations and 2 csv files. -->
## Dataset Details
This dataset contains a total of 21,122 fully labeled images featuring 20 different kinds of fruits. It is structured into an 80% training set (16,899 images) and a 20% testing set (4,223 images), facilitating a ready-to-use framework for model training and evaluation.
Additionally, there are two CSV files that label the types of fruits depicted in each image.
### Dataset Description
The "DeepFruit" dataset is a comprehensive collection designed for the advancement of research in fruit detection, recognition, and classification. It encompasses a wide array of applications, including but not limited to, fruit recognition systems and calorie estimation. A total of 21,122 fully labeled images, featuring 20 different kinds of fruits. It is structured into an 80% training set (16,899 images) and a 20% testing set (4,223 images), facilitating a ready-to-use framework for model training and evaluation. This dataset provides a valuable resource for researchers aiming to develop automated systems leveraging deep learning, computer vision, and machine learning techniques for fruit image analysis.
- **Language(s):** en
- **License:** Mendeley License: CC BY 4.0
### Dataset Sources
Data: https://data.mendeley.com/datasets/5prc54r4rt/1
Paper: https://www.sciencedirect.com/science/article/pii/S2352340923006248#sec0003
## Uses
Convert the fruit dataset from image files to PIL objects.
### Direct Use
## Dataset Structure
"Train" & "Test": Datasets
"image_id": datasets.Value("string")
"number" - folder number:datasets.Value("int32")
"image": datasets.Image()
"image_path": datasets.Value("string")
"label": datasets.Value("string")
### Curation Rationale
The rationale lies in the dataset's foundational role in enabling advanced machine learning applications in dietary and health management. By converting fruit images to the PIL format, it prepares data for analysis that could lead to innovations in recognizing and understanding fruit characteristics. This groundwork is crucial for developing technologies that assist in dietary planning, nutritional education, and managing health conditions through better food choices, thereby having a broad positive effect on public health and awareness.
#### Data Collection and Processing
Image Format: All images are expected to be in JPEG format. Non-JPEG files are excluded during the data processing phase, ensuring consistency in file format.
Label Extraction: Labels are extracted from separate CSV files (Labels_Train.csv and Labels_Test.csv), which map image names to their corresponding fruit labels. This method ensures that labels are organized and accessible.
Data Splitting: The dataset is split into training and testing sets, as indicated by the separate ZIP files for train and test data. This standard practice facilitates the evaluation of model performance on unseen data.
Python Imaging Library (PIL): Used for opening and manipulating images in the Python Imaging Library format. This choice is made for its wide adoption and ease of integration with other Python libraries for data science and machine learning tasks.
Datasets Library from Hugging Face: Facilitates the creation, distribution, and loading of the dataset. This library provides a standardized way to work with datasets, including features for splitting, processing, and accessing dataset information.
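The label extraction step can be sketched as follows (the CSV column names below are assumptions based on the description, not the verified file layout):

```python
import csv
import io

def load_labels(csv_text: str) -> dict:
    """Map image file names to fruit labels from a Labels_*.csv file."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["image_name"]: row["label"] for row in reader}

# tiny in-memory stand-in for Labels_Train.csv
sample_csv = "image_name,label\napple_001.jpg,apple\nmango_004.jpg,mango\n"
labels = load_labels(sample_csv)
print(labels["apple_001.jpg"])  # apple
```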
#### Supported Tasks
The fruit images were captured under various conditions, including different plate sizes, shapes, and situations, as well as varying angles, brightness levels, and distances.
1. Foundation for Advanced ML Models/ Algorithms Training: By converting the fruit dataset into PIL format, we ensure that the data is in a uniform, accessible format that is compatible with various machine learning and deep learning libraries. This standardization is vital for the efficient training, validation, and testing of different classification models.
2. Enables Comprehensive Analysis: The dataset, featuring a wide variety of fruit images, is essential for developing a deep understanding of fruit characteristics. This includes not only basic identification but also detailed analyses such as sugar content, calorie count, and vitamin composition, which are crucial for dietary planning and health management.
3. Basis for Practical Applications: The dataset's conversion and subsequent use in machine learning model training are not academic exercises but are intended for real-world applications. The insights gained from this project could significantly impact dietary planning, particularly for individuals with specific health considerations like diabetes, by providing accurate, detailed information about fruit characteristics.
## Bias, Risks, and Limitations
Representation Bias: Given the dataset comprises 20 diverse fruit types across 8 combinations, there might be an underrepresentation of certain fruits, particularly those that are less common or indigenous to specific regions. This could lead to a model trained on this dataset performing less accurately on fruit types or varieties not included or underrepresented.
Misclassification Risk: In critical applications where accurate fruit identification is crucial (e.g., dietary management apps, agricultural sorting mechanisms), misclassification could lead to adverse outcomes. This risk is heightened if the dataset contains mislabeled examples or if the model struggles with fruits that have similar appearances.
Scope of Application: The dataset's utility is primarily confined to the domain of fruit recognition and classification. It may not be suitable for more nuanced tasks within agricultural technology, such as detecting fruit diseases or assessing ripeness, unless supplemented with additional, specialized data. |
alfredplpl/wikipedia-qa-ja-1m | ---
language:
- ja
license: cc-by-sa-3.0
size_categories:
- 1M<n<10M
task_categories:
- question-answering
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 381548920
num_examples: 1395760
download_size: 0
dataset_size: 381548920
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikipedia-qa-ja-1m"
# Original Dataset
- hpprc/wikipedia-20240101
# Procedure
- Extract the first line of each article (identified by its title) from the dataset.
- Generate the answer by summarizing that line with an LLM:
  - Input a RAG-like prompt into CALM 2 7B Chat.
  - Format the response.
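These preprocessing steps can be sketched as follows. The helper name is illustrative; the template is the RAG-like prompt shown in the next section:

```python
# Illustrative helper (the function name is an assumption) that assembles
# the RAG-like prompt fed to CALM 2 7B Chat for each (title, text) pair.
def build_prompt(title: str, text: str) -> str:
    return (
        f"USER: {title}とはなんですか?次の文章を参考に一言でまとめてください。{text}\n"
        "ASSISTANT: "
    )

prompt = build_prompt("富士山", "富士山は日本で最も高い山です。")
```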
# RAG-like Prompt
```python
# English gloss: "USER: What is {title}? Please summarize it in one
# sentence, referring to the following text: {text} / ASSISTANT: "
f"""USER: {title}とはなんですか?次の文章を参考に一言でまとめてください。{text}
ASSISTANT: """
``` |
unicamp-dl/quati | ---
license: cc-by-4.0
task_categories:
- question-answering
language:
- pt
size_categories:
- 1M<n<10M
---
# Quati Information Retrieval Dataset
The Quati dataset aims to support the development of Brazilian Portuguese (pt-br) Information Retrieval (IR) systems, providing document passages originally created in pt-br, as well as queries (topics) created by native speakers.
## License
Quati is licensed under [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
## Citation Information
```
@misc{bueno2024quati,
title={Quati: A Brazilian Portuguese Information Retrieval Dataset from Native Speakers},
author={Mirelle Bueno and Eduardo Seiti de Oliveira and Rodrigo Nogueira and Roberto A. Lotufo and Jayr Alencar Pereira},
year={2024},
eprint={2404.06976},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
## About Quati and how to use
Quati dataset is currently released in two versions: one with 1 million passages, and a larger one with 10 million passages. So far we have prepared only validation *qrels* for both versions, annotating 50 topics with an average of 97.78 passages per query on the 10M version, and 38.66 passages per query on the 1M version.
Quati can be used to evaluate any Information Retrieval system targeting Brazilian Portuguese documents. The dataset creation and annotation pipeline can also be used to further expand the passage annotations, or to create IR datasets targeting other specific languages. Please refer to [our publication](https://arxiv.org/abs/2404.06976) for further details about the dataset and its creation process.
### Obtaining the 1M dataset version
```python
>>> quati_1M_passages = load_dataset("unicamp-dl/quati", "quati_1M_passages")
>>> quati_1M_passages['quati_1M_passages'][0]
{'passage_id': 'clueweb22-pt0000-00-00003_1',
'passage': 'Se você precisar de ajuda, visite o website nacional sobre a COVID-19 ou ligue para a linha de apoio à COVID-19 808 24 24 24 Perguntas mais frequentes Posso viajar entre Sintra e Cascais? Quais são as restrições de viagem em Cascais? Qual o número de telefone de apoio para a COVID 19 em Cascais? Preciso utilizar máscara facial no transporte público em Cascais? A prática do distanciamento social é compulsória em Cascais? O que eu devo fazer caso apresente sintomas da COVID-19 quando chegar em Cascais? Última atualização: 25 Abr 2022 Aplicam-se exceções, para detalhes completos: European Union. Estamos trabalhando ininterruptamente para lhe trazer as últimas informações de viagem relacionadas à COVID-19. Esta informação é compilada a partir de fontes oficiais. Ao melhor de nosso conhecimento, está correta de acordo com a última atualização. Visite Avisos de Viagem Rome2rio para ajuda geral. Perguntas & Respostas Qual a maneira mais econômica de ir de Sintra para Cascais? Qual a maneira mais rápida de ir de Sintra para Cascais? Existe um ônibus direto entre Sintra e Cascais? Qual a distância entre Sintra e Cascais? Como viajar de Sintra para Cascais sem carro? Qual a distância entre Sintra e Cascais? Aonde pegar ônibus de Sintra para Cascais?'}
```
### Obtaining the complete 10M dataset version
```python
>>> quati_10M_all = load_dataset("unicamp-dl/quati", "quati_10M_passages")
>>> quati_10M_all
DatasetDict({
quati_10M_passages_part_00: Dataset({
features: ['passage_id', 'passage'],
num_rows: 2000000
})
quati_10M_passages_part_01: Dataset({
features: ['passage_id', 'passage'],
num_rows: 2000000
})
quati_10M_passages_part_02: Dataset({
features: ['passage_id', 'passage'],
num_rows: 2000000
})
quati_10M_passages_part_03: Dataset({
features: ['passage_id', 'passage'],
num_rows: 2000000
})
quati_10M_passages_part_04: Dataset({
features: ['passage_id', 'passage'],
num_rows: 2000000
})
})
```
It is possible to download single parts of the 10M dataset by directly passing the corresponding configuration name ― e.g. `quati_10M_passages_part_02`.
### Obtaining the qrels
```python
>>> quati_1M_qrels = load_dataset("unicamp-dl/quati", "quati_1M_qrels")
>>> quati_1M_qrels['quati_1M_qrels'][0]
{'query_id': 1, 'passage_id': 'clueweb22-pt0000-78-09747_0', 'score': 1}
```
```python
>>> quati_10M_qrels = load_dataset("unicamp-dl/quati", "quati_10M_qrels")
>>> quati_10M_qrels['quati_10M_qrels'][2560]
{'query_id': 105, 'passage_id': 'clueweb22-pt0001-93-08394_4', 'score': 0}
```
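For use with standard IR evaluation tooling, the qrels rows loaded above can be serialized to the conventional TREC qrels layout (`<query_id> 0 <passage_id> <score>`). A minimal stdlib sketch; the helper name is illustrative:

```python
# Write qrels rows (dicts shaped like the examples above) as TREC-format
# qrels lines: "<query_id> 0 <passage_id> <score>", the layout expected
# by tools such as trec_eval.
def to_trec_qrels(rows) -> str:
    return "\n".join(
        f"{r['query_id']} 0 {r['passage_id']} {r['score']}" for r in rows
    )

trec = to_trec_qrels(
    [{"query_id": 1, "passage_id": "clueweb22-pt0000-78-09747_0", "score": 1}]
)
```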
### Obtaining the topics
```python
>>> quati_all_topics = load_dataset("unicamp-dl/quati", "quati_all_topics")
>>> quati_all_topics['quati_all_topics'][-1]
{'query_id': 199,
'query': 'Quais foram os principais artistas do Impressionismo?'}
```
```python
>>> quati_test_topics = load_dataset("unicamp-dl/quati", "quati_test_topics")
>>> quati_test_topics['quati_test_topics'][0]
{'query_id': 1, 'query': 'Qual a maior característica da fauna brasileira?'}
```
### Cloning the dataset repository from Hugging Face
It is possible to obtain the entire dataset by cloning its git repository from the Hugging Face Hub:
```
git clone git@hf.co:datasets/unicamp-dl/quati quati_dataset
```
|
ColumbiaNLP/FLUTE | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
- machine-generated
- crowdsourced
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: FLUTE
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- text2text-generation
task_ids:
- natural-language-inference
- explanation-generation
---
# Dataset Card for FigLang2022SharedTask
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://figlang2022sharedtask.github.io/
- **Repository:**
- **Paper:** TBA
- **Point of Contact:** tuhin.chakr@cs.columbia.edu
### Dataset Summary
A model-in-the-loop approach for figurative language generation and explainability.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
TBA
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
sbx/superlim-2 | ---
annotations_creators:
- other
language:
- sv
language_creators:
- other
license: []
multilinguality:
- monolingual
pretty_name: 'A standardized suite for evaluation and analysis of Swedish natural
language understanding systems.'
size_categories:
- unknown
source_datasets: []
tags: []
task_categories:
- multiple-choice
- text-classification
- question-answering
- sentence-similarity
- token-classification
task_ids:
- sentiment-analysis
- acceptability-classification
- closed-domain-qa
- word-sense-disambiguation
- coreference-resolution
---
# Dataset Card for SuperLim 2.0
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The official homepage of Språkbanken](https://spraakbanken.gu.se/resurser/superlim/)
- **Repository:**
- **Paper:**[SwedishGLUE – Towards a Swedish Test Set for Evaluating Natural Language Understanding Models](https://gup.ub.gu.se/publication/299130?lang=sv)
- **Leaderboard:** [To be implemented]
- **Point of Contact:**[sb-info@svenska.gu.se](sb-info@svenska.gu.se)
### Dataset Summary
SuperLim 2.0 is a continuation of SuperLim 1.0, which aims for a standardized suite for evaluation and analysis of Swedish natural language understanding systems. The projects is inspired by the GLUE/SuperGLUE projects from which the name is derived: "lim" is the Swedish translation of "glue".
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Swedish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
Most datasets have train, dev and test splits. However, a few (`supersim`, `sweanalogy` and `swesat-synonyms`) only have train and test splits. The diagnostic tasks `swediagnostics` and `swewinogender` only have a test split, but they can be evaluated with models trained on `swenli`, since they are also NLI-based.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
To cite as a whole, use the standard reference. If you use or reference individual resources, cite the references specific for these resources:
Standard reference:
To appear in EMNLP 2023, citation will come soon.
Dataset references:
[More information needed]
Thanks to [Felix Morger](https://github.com/felixhultin) for adding this dataset. |
biglam/europeana_newspapers | ---
annotations_creators:
- no-annotation
language:
- de
- fr
- el
- et
- fi
- hr
- ji
- pl
- ru
- sr
- sv
- uk
language_creators:
- machine-generated
multilinguality:
- multilingual
pretty_name: 'Europeana Newspapers '
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- newspapers
- lam
- OCR
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for Dataset Name
This dataset contains historic newspapers from [Europeana](https://pro.europeana.eu/page/iiif#download). In total the collection has ~32 billion tokens. Documentation for this dataset is a work in progress.
## Dataset Details
### Dataset Description
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
To download the full dataset using the `Datasets` library you can do the following
```python
from datasets import load_dataset
dataset = load_dataset("biglam/europeana_newspapers")
```
You can also access a subset based on language or decade ranges using the following function.
```python
from typing import List, Optional, Literal, Union
from huggingface_hub import hf_hub_url, list_repo_files
LanguageOption = Literal[
"et",
"pl",
"sr",
"ru",
"sv",
"no_language_found",
"ji",
"hr",
"el",
"uk",
"fr",
"fi",
"de",
"multi_language",
]
def get_files_for_lang_and_years(
    languages: Union[None, List[LanguageOption]] = None,
    min_year: Optional[int] = None,
    max_year: Optional[int] = None,
):
    files = list_repo_files("biglam/europeana_newspapers", repo_type="dataset")
    parquet_files = [f for f in files if f.endswith(".parquet")]
    # Keep only files for the requested languages (all languages if None)
    parquet_files = [
        f
        for f in parquet_files
        if languages is None or any(lang in f for lang in languages)
    ]
    # Keep only files whose decade falls inside [min_year, max_year]
    filtered_files = [
        f
        for f in parquet_files
        if (min_year is None or min_year <= int(f.split("-")[1].split(".")[0]))
        and (max_year is None or int(f.split("-")[1].split(".")[0]) <= max_year)
    ]
    return [
        hf_hub_url("biglam/europeana_newspapers", f, repo_type="dataset")
        for f in filtered_files
    ]
```
This function takes a list of language codes and optional min/max values for the decades you want to include. You can use it to get the URLs of the files you want to download from the Hub:
```python
ds = load_dataset("parquet", data_files=get_files_for_lang_and_years(['fr']), num_proc=4)
```
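The decade filter in the helper above parses the decade out of the parquet file name. A quick check of that parsing logic; file names of the form `<lang>-<decade>.parquet` (e.g. `de-1850.parquet`) are an assumption about the repository layout:

```python
# Mirror of the file-name parsing used in get_files_for_lang_and_years:
# take the segment after "-" and before ".parquet" as the decade.
def decade_from_filename(name: str) -> int:
    return int(name.split("-")[1].split(".")[0])

decade = decade_from_filename("de-1850.parquet")
```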
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
albertvillanova/universal_dependencies | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- af
- aii
- ajp
- akk
- am
- apu
- aqz
- ar
- be
- bg
- bho
- bm
- br
- bxr
- ca
- ckt
- cop
- cs
- cu
- cy
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fo
- fr
- fro
- ga
- gd
- gl
- got
- grc
- gsw
- gun
- gv
- he
- hi
- hr
- hsb
- hu
- hy
- id
- is
- it
- ja
- kfm
- kk
- kmr
- ko
- koi
- kpv
- krl
- la
- lt
- lv
- lzh
- mdf
- mr
- mt
- myu
- myv
- nl
- 'no'
- nyq
- olo
- orv
- otk
- pcm
- pl
- pt
- ro
- ru
- sa
- sk
- sl
- sme
- sms
- soj
- sq
- sr
- sv
- swl
- ta
- te
- th
- tl
- tpn
- tr
- ug
- uk
- ur
- vi
- wbp
- wo
- yo
- yue
- zh
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- parsing
paperswithcode_id: universal-dependencies
pretty_name: Universal Dependencies Treebank
config_names:
- af_afribooms
- aii_as
- ajp_madar
- akk_pisandub
- akk_riao
- am_att
- apu_ufpa
- aqz_tudet
- ar_nyuad
- ar_padt
- ar_pud
- be_hse
- bg_btb
- bho_bhtb
- bm_crb
- br_keb
- bxr_bdt
- ca_ancora
- ckt_hse
- cop_scriptorium
- cs_cac
- cs_cltt
- cs_fictree
- cs_pdt
- cs_pud
- cu_proiel
- cy_ccg
- da_ddt
- de_gsd
- de_hdt
- de_lit
- de_pud
- el_gdt
- en_esl
- en_ewt
- en_gum
- en_gumreddit
- en_lines
- en_partut
- en_pronouns
- en_pud
- es_ancora
- es_gsd
- es_pud
- et_edt
- et_ewt
- eu_bdt
- fa_perdt
- fa_seraji
- fi_ftb
- fi_ood
- fi_pud
- fi_tdt
- fo_farpahc
- fo_oft
- fr_fqb
- fr_ftb
- fr_gsd
- fr_partut
- fr_pud
- fr_sequoia
- fr_spoken
- fro_srcmf
- ga_idt
- gd_arcosg
- gl_ctg
- gl_treegal
- got_proiel
- grc_perseus
- grc_proiel
- gsw_uzh
- gun_dooley
- gun_thomas
- gv_cadhan
- he_htb
- hi_hdtb
- hi_pud
- hr_set
- hsb_ufal
- hu_szeged
- hy_armtdp
- id_csui
- id_gsd
- id_pud
- is_icepahc
- is_pud
- it_isdt
- it_partut
- it_postwita
- it_pud
- it_twittiro
- it_vit
- ja_bccwj
- ja_gsd
- ja_modern
- ja_pud
- kfm_aha
- kk_ktb
- kmr_mg
- ko_gsd
- ko_kaist
- ko_pud
- koi_uh
- kpv_ikdp
- kpv_lattice
- krl_kkpp
- la_ittb
- la_llct
- la_perseus
- la_proiel
- lt_alksnis
- lt_hse
- lv_lvtb
- lzh_kyoto
- mdf_jr
- mr_ufal
- mt_mudt
- myu_tudet
- myv_jr
- nl_alpino
- nl_lassysmall
- no_bokmaal
- no_nynorsk
- no_nynorsklia
- nyq_aha
- olo_kkpp
- orv_rnc
- orv_torot
- otk_tonqq
- pcm_nsc
- pl_lfg
- pl_pdb
- pl_pud
- pt_bosque
- pt_gsd
- pt_pud
- qhe_hiencs
- qtd_sagt
- ro_nonstandard
- ro_rrt
- ro_simonero
- ru_gsd
- ru_pud
- ru_syntagrus
- ru_taiga
- sa_ufal
- sa_vedic
- sk_snk
- sl_ssj
- sl_sst
- sme_giella
- sms_giellagas
- soj_aha
- sq_tsa
- sr_set
- sv_lines
- sv_pud
- sv_talbanken
- swl_sslc
- ta_mwtt
- ta_ttb
- te_mtg
- th_pud
- tl_trg
- tl_ugnayan
- tpn_tudet
- tr_boun
- tr_gb
- tr_imst
- tr_pud
- ug_udt
- uk_iu
- ur_udtb
- vi_vtb
- wbp_ufal
- wo_wtb
- yo_ytb
- yue_hk
- zh_cfl
- zh_gsd
- zh_gsdsimp
- zh_hk
- zh_pud
tags:
- constituency-parsing
- dependency-parsing
dataset_info:
- config_name: af_afribooms
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 3523113
num_examples: 1315
- name: validation
num_bytes: 547285
num_examples: 194
- name: test
num_bytes: 1050299
num_examples: 425
download_size: 3088237
dataset_size: 5120697
- config_name: akk_pisandub
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 153470
num_examples: 101
download_size: 101789
dataset_size: 153470
- config_name: akk_riao
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 3374577
num_examples: 1804
download_size: 2022357
dataset_size: 3374577
- config_name: aqz_tudet
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 8286
num_examples: 24
download_size: 5683
dataset_size: 8286
- config_name: sq_tsa
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 116034
num_examples: 60
download_size: 68875
dataset_size: 116034
- config_name: am_att
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1554859
num_examples: 1074
download_size: 1019607
dataset_size: 1554859
- config_name: grc_perseus
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 22611612
num_examples: 11476
- name: validation
num_bytes: 3152233
num_examples: 1137
- name: test
num_bytes: 3004502
num_examples: 1306
download_size: 18898313
dataset_size: 28768347
- config_name: grc_proiel
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 30938089
num_examples: 15014
- name: validation
num_bytes: 2264551
num_examples: 1019
- name: test
num_bytes: 2192289
num_examples: 1047
download_size: 23715831
dataset_size: 35394929
- config_name: apu_ufpa
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 75578
num_examples: 76
download_size: 69565
dataset_size: 75578
- config_name: ar_nyuad
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 79064476
num_examples: 15789
- name: validation
num_bytes: 9859912
num_examples: 1986
- name: test
num_bytes: 9880240
num_examples: 1963
download_size: 58583673
dataset_size: 98804628
- config_name: ar_padt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 58537298
num_examples: 6075
- name: validation
num_bytes: 7787253
num_examples: 909
- name: test
num_bytes: 7428063
num_examples: 680
download_size: 51208169
dataset_size: 73752614
- config_name: ar_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2816625
num_examples: 1000
download_size: 2084082
dataset_size: 2816625
- config_name: hy_armtdp
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 7697891
num_examples: 1975
- name: validation
num_bytes: 988849
num_examples: 249
- name: test
num_bytes: 947287
num_examples: 278
download_size: 6886567
dataset_size: 9634027
- config_name: aii_as
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 52540
num_examples: 57
download_size: 32639
dataset_size: 52540
- config_name: bm_crb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1502886
num_examples: 1026
download_size: 892924
dataset_size: 1502886
- config_name: eu_bdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 8199861
num_examples: 5396
- name: validation
num_bytes: 2701073
num_examples: 1798
- name: test
num_bytes: 2734601
num_examples: 1799
download_size: 8213576
dataset_size: 13635535
- config_name: be_hse
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 34880663
num_examples: 21555
- name: validation
num_bytes: 1745668
num_examples: 1090
- name: test
num_bytes: 1818113
num_examples: 889
download_size: 26433402
dataset_size: 38444444
- config_name: bho_bhtb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 947740
num_examples: 357
download_size: 614159
dataset_size: 947740
- config_name: br_keb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1026257
num_examples: 888
download_size: 679680
dataset_size: 1026257
- config_name: bg_btb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 18545312
num_examples: 8907
- name: validation
num_bytes: 2393174
num_examples: 1115
- name: test
num_bytes: 2344136
num_examples: 1116
download_size: 14910603
dataset_size: 23282622
- config_name: bxr_bdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 17364
num_examples: 19
- name: test
num_bytes: 1116630
num_examples: 908
download_size: 726053
dataset_size: 1133994
- config_name: yue_hk
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1242850
num_examples: 1004
download_size: 710060
dataset_size: 1242850
- config_name: ca_ancora
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 46502842
num_examples: 13123
- name: validation
num_bytes: 6282364
num_examples: 1709
- name: test
num_bytes: 6441038
num_examples: 1846
download_size: 35924146
dataset_size: 59226244
- config_name: zh_cfl
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 660584
num_examples: 451
download_size: 384725
dataset_size: 660584
- config_name: zh_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 9268661
num_examples: 3997
- name: validation
num_bytes: 1188371
num_examples: 500
- name: test
num_bytes: 1130467
num_examples: 500
download_size: 6828367
dataset_size: 11587499
- config_name: zh_gsdsimp
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 9268663
num_examples: 3997
- name: validation
num_bytes: 1188383
num_examples: 500
- name: test
num_bytes: 1130459
num_examples: 500
download_size: 6828419
dataset_size: 11587505
- config_name: zh_hk
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 880193
num_examples: 1004
download_size: 494447
dataset_size: 880193
- config_name: zh_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2425817
num_examples: 1000
download_size: 1606982
dataset_size: 2425817
- config_name: ckt_hse
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 808669
num_examples: 1004
download_size: 771943
dataset_size: 808669
- config_name: lzh_kyoto
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 26615708
num_examples: 38669
- name: validation
num_bytes: 3770507
num_examples: 5296
- name: test
num_bytes: 3155207
num_examples: 4469
download_size: 22658287
dataset_size: 33541422
- config_name: cop_scriptorium
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 3944468
num_examples: 1089
- name: validation
num_bytes: 1566786
num_examples: 381
- name: test
num_bytes: 1487709
num_examples: 403
download_size: 4502996
dataset_size: 6998963
- config_name: hr_set
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 19104315
num_examples: 6914
- name: validation
num_bytes: 2787184
num_examples: 960
- name: test
num_bytes: 3035797
num_examples: 1136
download_size: 15103034
dataset_size: 24927296
- config_name: cs_cac
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 81527862
num_examples: 23478
- name: validation
num_bytes: 1898678
num_examples: 603
- name: test
num_bytes: 1878841
num_examples: 628
download_size: 55990235
dataset_size: 85305381
- config_name: cs_cltt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 4277239
num_examples: 860
- name: validation
num_bytes: 752253
num_examples: 129
- name: test
num_bytes: 646103
num_examples: 136
download_size: 3745656
dataset_size: 5675595
- config_name: cs_fictree
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 21490020
num_examples: 10160
- name: validation
num_bytes: 2677727
num_examples: 1309
- name: test
num_bytes: 2679930
num_examples: 1291
download_size: 17464342
dataset_size: 26847677
- config_name: cs_pdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 201356662
num_examples: 68495
- name: validation
num_bytes: 27366981
num_examples: 9270
- name: test
num_bytes: 29817339
num_examples: 10148
download_size: 171506068
dataset_size: 258540982
- config_name: cs_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 3195818
num_examples: 1000
download_size: 2231853
dataset_size: 3195818
- config_name: da_ddt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 8689809
num_examples: 4383
- name: validation
num_bytes: 1117939
num_examples: 564
- name: test
num_bytes: 1082651
num_examples: 565
download_size: 6425281
dataset_size: 10890399
- config_name: nl_alpino
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 22503950
num_examples: 12264
- name: validation
num_bytes: 1411253
num_examples: 718
- name: test
num_bytes: 1354908
num_examples: 596
download_size: 16858557
dataset_size: 25270111
- config_name: nl_lassysmall
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 9001614
num_examples: 5787
- name: validation
num_bytes: 1361552
num_examples: 676
- name: test
num_bytes: 1391136
num_examples: 875
download_size: 8034396
dataset_size: 11754302
- config_name: en_esl
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5335977
num_examples: 4124
- name: validation
num_bytes: 648562
num_examples: 500
- name: test
num_bytes: 651829
num_examples: 500
download_size: 3351548
dataset_size: 6636368
- config_name: en_ewt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 22755753
num_examples: 12543
- name: validation
num_bytes: 2829889
num_examples: 2002
- name: test
num_bytes: 2820398
num_examples: 2077
download_size: 16893922
dataset_size: 28406040
- config_name: en_gum
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 8999554
num_examples: 4287
- name: validation
num_bytes: 1704949
num_examples: 784
- name: test
num_bytes: 1743317
num_examples: 890
download_size: 7702761
dataset_size: 12447820
- config_name: en_gumreddit
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1365930
num_examples: 587
- name: validation
num_bytes: 317546
num_examples: 150
- name: test
num_bytes: 374707
num_examples: 158
download_size: 1195979
dataset_size: 2058183
- config_name: en_lines
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5728898
num_examples: 3176
- name: validation
num_bytes: 1911762
num_examples: 1032
- name: test
num_bytes: 1766797
num_examples: 1035
download_size: 5522254
dataset_size: 9407457
- config_name: en_partut
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 4133445
num_examples: 1781
- name: validation
num_bytes: 265039
num_examples: 156
- name: test
num_bytes: 326834
num_examples: 153
download_size: 2720286
dataset_size: 4725318
- config_name: en_pronouns
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 207364
num_examples: 285
download_size: 147181
dataset_size: 207364
- config_name: en_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2282027
num_examples: 1000
download_size: 1340563
dataset_size: 2282027
- config_name: myv_jr
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2763297
num_examples: 1690
download_size: 1945981
dataset_size: 2763297
- config_name: et_edt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 42901059
num_examples: 24633
- name: validation
num_bytes: 5551620
num_examples: 3125
- name: test
num_bytes: 5994421
num_examples: 3214
download_size: 32393618
dataset_size: 54447100
- config_name: et_ewt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 4199896
num_examples: 2837
- name: validation
num_bytes: 1089459
num_examples: 743
- name: test
num_bytes: 1600116
num_examples: 913
download_size: 4044147
dataset_size: 6889471
- config_name: fo_farpahc
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2114958
num_examples: 1020
- name: validation
num_bytes: 809707
num_examples: 300
- name: test
num_bytes: 798245
num_examples: 301
download_size: 2186706
dataset_size: 3722910
- config_name: fo_oft
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1220792
num_examples: 1208
download_size: 802681
dataset_size: 1220792
- config_name: fi_ftb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 16800109
num_examples: 14981
- name: validation
num_bytes: 2074201
num_examples: 1875
- name: test
num_bytes: 2144908
num_examples: 1867
download_size: 13132466
dataset_size: 21019218
- config_name: fi_ood
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2366923
num_examples: 2122
download_size: 1480506
dataset_size: 2366923
- config_name: fi_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2086421
num_examples: 1000
download_size: 1411514
dataset_size: 2086421
- config_name: fi_tdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 22065448
num_examples: 12217
- name: validation
num_bytes: 2483303
num_examples: 1364
- name: test
num_bytes: 2855263
num_examples: 1555
download_size: 16692242
dataset_size: 27404014
- config_name: fr_fqb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2674644
num_examples: 2289
download_size: 1556235
dataset_size: 2674644
- config_name: fr_ftb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 44714315
num_examples: 14759
- name: validation
num_bytes: 3929428
num_examples: 1235
- name: test
num_bytes: 7583038
num_examples: 2541
download_size: 30926802
dataset_size: 56226781
- config_name: fr_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 38329902
num_examples: 14449
- name: validation
num_bytes: 3861548
num_examples: 1476
- name: test
num_bytes: 1086926
num_examples: 416
download_size: 25492044
dataset_size: 43278376
- config_name: fr_partut
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2620477
num_examples: 803
- name: validation
num_bytes: 205839
num_examples: 107
- name: test
num_bytes: 288829
num_examples: 110
download_size: 1817897
dataset_size: 3115145
- config_name: fr_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2660405
num_examples: 1000
download_size: 1685033
dataset_size: 2660405
- config_name: fr_sequoia
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5370647
num_examples: 2231
- name: validation
num_bytes: 1065411
num_examples: 412
- name: test
num_bytes: 1067676
num_examples: 456
download_size: 4415282
dataset_size: 7503734
- config_name: fr_spoken
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1625626
num_examples: 1167
- name: validation
num_bytes: 1091750
num_examples: 909
- name: test
num_bytes: 1078438
num_examples: 730
download_size: 2483341
dataset_size: 3795814
- config_name: gl_ctg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 8157432
num_examples: 2272
- name: validation
num_bytes: 3057483
num_examples: 860
- name: test
num_bytes: 3053764
num_examples: 861
download_size: 8230649
dataset_size: 14268679
- config_name: gl_treegal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1804389
num_examples: 600
- name: test
num_bytes: 1174023
num_examples: 400
download_size: 1741471
dataset_size: 2978412
- config_name: de_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 32297384
num_examples: 13814
- name: validation
num_bytes: 1504189
num_examples: 799
- name: test
num_bytes: 2000117
num_examples: 977
download_size: 21507364
dataset_size: 35801690
- config_name: de_hdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 334214761
num_examples: 153035
- name: validation
num_bytes: 39099013
num_examples: 18434
- name: test
num_bytes: 39519143
num_examples: 18459
download_size: 249243037
dataset_size: 412832917
- config_name: de_lit
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 3327891
num_examples: 1922
download_size: 2060988
dataset_size: 3327891
- config_name: de_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2684407
num_examples: 1000
download_size: 1731875
dataset_size: 2684407
- config_name: got_proiel
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5175361
num_examples: 3387
- name: validation
num_bytes: 1498101
num_examples: 985
- name: test
num_bytes: 1518642
num_examples: 1029
download_size: 5225655
dataset_size: 8192104
- config_name: el_gdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 6028077
num_examples: 1662
- name: validation
num_bytes: 1492610
num_examples: 403
- name: test
num_bytes: 1521094
num_examples: 456
download_size: 5788161
dataset_size: 9041781
- config_name: he_htb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 17324640
num_examples: 5241
- name: validation
num_bytes: 1440985
num_examples: 484
- name: test
num_bytes: 1550465
num_examples: 491
download_size: 12054025
dataset_size: 20316090
- config_name: qhe_hiencs
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1510145
num_examples: 1448
- name: validation
num_bytes: 244129
num_examples: 225
- name: test
num_bytes: 236291
num_examples: 225
download_size: 914584
dataset_size: 1990565
- config_name: hi_hdtb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 61893814
num_examples: 13304
- name: validation
num_bytes: 7748544
num_examples: 1659
- name: test
num_bytes: 7786343
num_examples: 1684
download_size: 51589681
dataset_size: 77428701
- config_name: hi_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 3384789
num_examples: 1000
download_size: 2303495
dataset_size: 3384789
- config_name: hu_szeged
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2822934
num_examples: 910
- name: validation
num_bytes: 1584932
num_examples: 441
- name: test
num_bytes: 1419130
num_examples: 449
download_size: 3687905
dataset_size: 5826996
- config_name: is_icepahc
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 97197159
num_examples: 34007
- name: validation
num_bytes: 18931295
num_examples: 4865
- name: test
num_bytes: 19039838
num_examples: 5157
download_size: 85106126
dataset_size: 135168292
- config_name: is_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2304432
num_examples: 1000
download_size: 1525635
dataset_size: 2304432
- config_name: id_csui
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1611334
num_examples: 656
- name: test
num_bytes: 888832
num_examples: 374
download_size: 1448601
dataset_size: 2500166
- config_name: id_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 11728948
num_examples: 4477
- name: validation
num_bytes: 1513894
num_examples: 559
- name: test
num_bytes: 1417208
num_examples: 557
download_size: 9487349
dataset_size: 14660050
- config_name: id_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1768596
num_examples: 1000
download_size: 1149692
dataset_size: 1768596
- config_name: ga_idt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 10327215
num_examples: 4005
- name: validation
num_bytes: 1057313
num_examples: 451
- name: test
num_bytes: 1109028
num_examples: 454
download_size: 7417728
dataset_size: 12493556
- config_name: it_isdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 33510781
num_examples: 13121
- name: validation
num_bytes: 1439348
num_examples: 564
- name: test
num_bytes: 1267932
num_examples: 482
download_size: 20998527
dataset_size: 36218061
- config_name: it_partut
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5428686
num_examples: 1781
- name: validation
num_bytes: 335085
num_examples: 156
- name: test
num_bytes: 413752
num_examples: 153
download_size: 3582155
dataset_size: 6177523
- config_name: it_postwita
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 10523322
num_examples: 5368
- name: validation
num_bytes: 1299818
num_examples: 671
- name: test
num_bytes: 1344079
num_examples: 674
download_size: 7611319
dataset_size: 13167219
- config_name: it_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2612838
num_examples: 1000
download_size: 1641073
dataset_size: 2612838
- config_name: it_twittiro
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2536429
num_examples: 1138
- name: validation
num_bytes: 323504
num_examples: 144
- name: test
num_bytes: 316211
num_examples: 142
download_size: 1894686
dataset_size: 3176144
- config_name: it_vit
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 24536095
num_examples: 8277
- name: validation
num_bytes: 3144507
num_examples: 743
- name: test
num_bytes: 2870355
num_examples: 1067
download_size: 17605311
dataset_size: 30550957
- config_name: ja_bccwj
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 119164443
num_examples: 40740
- name: validation
num_bytes: 23390188
num_examples: 8417
- name: test
num_bytes: 21904413
num_examples: 7871
download_size: 87340125
dataset_size: 164459044
- config_name: ja_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 36905139
num_examples: 7027
- name: validation
num_bytes: 2662999
num_examples: 501
- name: test
num_bytes: 2858141
num_examples: 543
download_size: 30397358
dataset_size: 42426279
- config_name: ja_modern
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 3062149
num_examples: 822
download_size: 2163988
dataset_size: 3062149
- config_name: ja_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 6322307
num_examples: 1000
download_size: 4661525
dataset_size: 6322307
- config_name: krl_kkpp
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 370378
num_examples: 228
download_size: 226103
dataset_size: 370378
- config_name: kk_ktb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 64737
num_examples: 31
- name: test
num_bytes: 1263246
num_examples: 1047
download_size: 849300
dataset_size: 1327983
- config_name: kfm_aha
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 8464
num_examples: 10
download_size: 6290
dataset_size: 8464
- config_name: koi_uh
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 117629
num_examples: 81
download_size: 91509
dataset_size: 117629
- config_name: kpv_ikdp
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 182189
num_examples: 132
download_size: 121684
dataset_size: 182189
- config_name: kpv_lattice
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 685683
num_examples: 435
download_size: 467085
dataset_size: 685683
- config_name: ko_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5480313
num_examples: 4400
- name: validation
num_bytes: 1156603
num_examples: 950
- name: test
num_bytes: 1129555
num_examples: 989
download_size: 4882238
dataset_size: 7766471
- config_name: ko_kaist
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 29037654
num_examples: 23010
- name: validation
num_bytes: 2511880
num_examples: 2066
- name: test
num_bytes: 2792215
num_examples: 2287
download_size: 21855177
dataset_size: 34341749
- config_name: ko_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2511856
num_examples: 1000
download_size: 2024810
dataset_size: 2511856
- config_name: kmr_mg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 30374
num_examples: 20
- name: test
num_bytes: 1248564
num_examples: 734
download_size: 765158
dataset_size: 1278938
- config_name: la_ittb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 54306304
num_examples: 22775
- name: validation
num_bytes: 4236222
num_examples: 2101
- name: test
num_bytes: 4221459
num_examples: 2101
download_size: 40247546
dataset_size: 62763985
- config_name: la_llct
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 26885433
num_examples: 7289
- name: validation
num_bytes: 3363915
num_examples: 850
- name: test
num_bytes: 3352500
num_examples: 884
download_size: 21975884
dataset_size: 33601848
- config_name: la_perseus
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2542043
num_examples: 1334
- name: test
num_bytes: 1575350
num_examples: 939
download_size: 2573703
dataset_size: 4117393
- config_name: la_proiel
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 24956038
num_examples: 15917
- name: validation
num_bytes: 2020476
num_examples: 1234
- name: test
num_bytes: 2029828
num_examples: 1260
download_size: 18434442
dataset_size: 29006342
- config_name: lv_lvtb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 29167529
num_examples: 10156
- name: validation
num_bytes: 4501172
num_examples: 1664
- name: test
num_bytes: 4565919
num_examples: 1823
download_size: 25227301
dataset_size: 38234620
- config_name: lt_alksnis
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 7272501
num_examples: 2341
- name: validation
num_bytes: 1763901
num_examples: 617
- name: test
num_bytes: 1648521
num_examples: 684
download_size: 7008248
dataset_size: 10684923
- config_name: lt_hse
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 433214
num_examples: 153
- name: validation
num_bytes: 433214
num_examples: 153
- name: test
num_bytes: 433214
num_examples: 153
download_size: 265619
dataset_size: 1299642
- config_name: olo_kkpp
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 18096
num_examples: 19
- name: test
num_bytes: 175355
num_examples: 106
download_size: 121837
dataset_size: 193451
- config_name: mt_mudt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1858001
num_examples: 1123
- name: validation
num_bytes: 826004
num_examples: 433
- name: test
num_bytes: 892629
num_examples: 518
download_size: 2011753
dataset_size: 3576634
- config_name: gv_cadhan
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 483042
num_examples: 291
download_size: 287206
dataset_size: 483042
- config_name: mr_ufal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 420345
num_examples: 373
- name: validation
num_bytes: 60791
num_examples: 46
- name: test
num_bytes: 56582
num_examples: 47
download_size: 339354
dataset_size: 537718
- config_name: gun_dooley
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 1037858
num_examples: 1046
download_size: 571571
dataset_size: 1037858
- config_name: gun_thomas
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 143111
num_examples: 98
download_size: 92963
dataset_size: 143111
- config_name: mdf_jr
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 234147
num_examples: 167
download_size: 162330
dataset_size: 234147
- config_name: myu_tudet
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 26202
num_examples: 62
download_size: 20315
dataset_size: 26202
- config_name: pcm_nsc
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 16079391
num_examples: 7279
- name: validation
num_bytes: 2099571
num_examples: 991
- name: test
num_bytes: 2063685
num_examples: 972
download_size: 14907410
dataset_size: 20242647
- config_name: nyq_aha
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 8723
num_examples: 10
download_size: 6387
dataset_size: 8723
- config_name: sme_giella
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1987666
num_examples: 2257
- name: test
num_bytes: 1142396
num_examples: 865
download_size: 1862302
dataset_size: 3130062
- config_name: no_bokmaal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 25647647
num_examples: 15696
- name: validation
num_bytes: 3828310
num_examples: 2409
- name: test
num_bytes: 3151638
num_examples: 1939
download_size: 19177350
dataset_size: 32627595
- config_name: no_nynorsk
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 25630539
num_examples: 14174
- name: validation
num_bytes: 3277649
num_examples: 1890
- name: test
num_bytes: 2601676
num_examples: 1511
download_size: 18532495
dataset_size: 31509864
- config_name: no_nynorsklia
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 3500907
num_examples: 3412
- name: validation
num_bytes: 1003845
num_examples: 881
- name: test
num_bytes: 999943
num_examples: 957
download_size: 3349676
dataset_size: 5504695
- config_name: cu_proiel
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 6106144
num_examples: 4124
- name: validation
num_bytes: 1639912
num_examples: 1073
- name: test
num_bytes: 1648459
num_examples: 1141
download_size: 6239839
dataset_size: 9394515
- config_name: fro_srcmf
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 11959859
num_examples: 13909
- name: validation
num_bytes: 1526574
num_examples: 1842
- name: test
num_bytes: 1535923
num_examples: 1927
download_size: 9043098
dataset_size: 15022356
- config_name: orv_rnc
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1527306
num_examples: 320
- name: test
num_bytes: 2552216
num_examples: 637
download_size: 2627398
dataset_size: 4079522
- config_name: orv_torot
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 18077991
num_examples: 13336
- name: validation
num_bytes: 2408313
num_examples: 1852
- name: test
num_bytes: 2347934
num_examples: 1756
download_size: 15296362
dataset_size: 22834238
- config_name: otk_tonqq
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 22829
num_examples: 18
download_size: 14389
dataset_size: 22829
- config_name: fa_perdt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 48654947
num_examples: 26196
- name: validation
num_bytes: 2687750
num_examples: 1456
- name: test
num_bytes: 2600303
num_examples: 1455
download_size: 33606395
dataset_size: 53943000
- config_name: fa_seraji
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 12627691
num_examples: 4798
- name: validation
num_bytes: 1634327
num_examples: 599
- name: test
num_bytes: 1675134
num_examples: 600
download_size: 9890107
dataset_size: 15937152
- config_name: pl_lfg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 16810910
num_examples: 13774
- name: validation
num_bytes: 2093712
num_examples: 1745
- name: test
num_bytes: 2100915
num_examples: 1727
download_size: 14865541
dataset_size: 21005537
- config_name: pl_pdb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 44652289
num_examples: 17722
- name: validation
num_bytes: 5494883
num_examples: 2215
- name: test
num_bytes: 5322608
num_examples: 2215
download_size: 36340919
dataset_size: 55469780
- config_name: pl_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2943603
num_examples: 1000
download_size: 1943983
dataset_size: 2943603
- config_name: pt_bosque
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 22808617
num_examples: 8328
- name: validation
num_bytes: 1201577
num_examples: 560
- name: test
num_bytes: 1131511
num_examples: 476
download_size: 15201503
dataset_size: 25141705
- config_name: pt_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 22208385
num_examples: 9664
- name: validation
num_bytes: 2805628
num_examples: 1210
- name: test
num_bytes: 2732063
num_examples: 1204
download_size: 15300844
dataset_size: 27746076
- config_name: pt_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2431942
num_examples: 1000
download_size: 1516883
dataset_size: 2431942
- config_name: ro_nonstandard
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 74489083
num_examples: 24121
- name: validation
num_bytes: 2663152
num_examples: 1052
- name: test
num_bytes: 3017162
num_examples: 1052
download_size: 50345748
dataset_size: 80169397
- config_name: ro_rrt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 23695399
num_examples: 8043
- name: validation
num_bytes: 2190973
num_examples: 752
- name: test
num_bytes: 2092520
num_examples: 729
download_size: 17187956
dataset_size: 27978892
- config_name: ro_simonero
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 15390734
num_examples: 3747
- name: validation
num_bytes: 1926639
num_examples: 443
- name: test
num_bytes: 1940787
num_examples: 491
download_size: 11409378
dataset_size: 19258160
- config_name: ru_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 10504099
num_examples: 3850
- name: validation
num_bytes: 1635884
num_examples: 579
- name: test
num_bytes: 1597603
num_examples: 601
download_size: 8830986
dataset_size: 13737586
- config_name: ru_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2695958
num_examples: 1000
download_size: 1869304
dataset_size: 2695958
- config_name: ru_syntagrus
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 126305584
num_examples: 48814
- name: validation
num_bytes: 17043673
num_examples: 6584
- name: test
num_bytes: 16880203
num_examples: 6491
download_size: 102745164
dataset_size: 160229460
- config_name: ru_taiga
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5802733
num_examples: 3138
- name: validation
num_bytes: 1382140
num_examples: 945
- name: test
num_bytes: 1314084
num_examples: 881
download_size: 5491427
dataset_size: 8498957
- config_name: sa_ufal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 431697
num_examples: 230
download_size: 424675
dataset_size: 431697
- config_name: sa_vedic
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2179608
num_examples: 2524
- name: test
num_bytes: 1209605
num_examples: 1473
download_size: 2041583
dataset_size: 3389213
- config_name: gd_arcosg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 3952356
num_examples: 1990
- name: validation
num_bytes: 1038211
num_examples: 645
- name: test
num_bytes: 1034788
num_examples: 538
download_size: 3474087
dataset_size: 6025355
- config_name: sr_set
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 9309552
num_examples: 3328
- name: validation
num_bytes: 1503953
num_examples: 536
- name: test
num_bytes: 1432672
num_examples: 520
download_size: 7414381
dataset_size: 12246177
- config_name: sms_giellagas
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 174744
num_examples: 104
download_size: 116491
dataset_size: 174744
- config_name: sk_snk
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 12017312
num_examples: 8483
- name: validation
num_bytes: 1863926
num_examples: 1060
- name: test
num_bytes: 1943012
num_examples: 1061
download_size: 10013420
dataset_size: 15824250
- config_name: sl_ssj
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 16713639
num_examples: 6478
- name: validation
num_bytes: 2070847
num_examples: 734
- name: test
num_bytes: 2083062
num_examples: 788
download_size: 12455962
dataset_size: 20867548
- config_name: sl_sst
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2903675
num_examples: 2078
- name: test
num_bytes: 1493885
num_examples: 1110
download_size: 2655777
dataset_size: 4397560
- config_name: soj_aha
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 6218
num_examples: 8
download_size: 4577
dataset_size: 6218
- config_name: ajp_madar
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 71956
num_examples: 100
download_size: 43174
dataset_size: 71956
- config_name: es_ancora
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 50101327
num_examples: 14305
- name: validation
num_bytes: 5883940
num_examples: 1654
- name: test
num_bytes: 5928986
num_examples: 1721
download_size: 37668083
dataset_size: 61914253
- config_name: es_gsd
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 39582074
num_examples: 14187
- name: validation
num_bytes: 3834443
num_examples: 1400
- name: test
num_bytes: 1253720
num_examples: 426
download_size: 26073760
dataset_size: 44670237
- config_name: es_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2595946
num_examples: 1000
download_size: 1628475
dataset_size: 2595946
- config_name: swl_sslc
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 57443
num_examples: 87
- name: validation
num_bytes: 59002
num_examples: 82
- name: test
num_bytes: 24542
num_examples: 34
download_size: 81699
dataset_size: 140987
- config_name: sv_lines
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 6731662
num_examples: 3176
- name: validation
num_bytes: 2239951
num_examples: 1032
- name: test
num_bytes: 2070626
num_examples: 1035
download_size: 7245283
dataset_size: 11042239
- config_name: sv_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2554725
num_examples: 1000
download_size: 1722516
dataset_size: 2554725
- config_name: sv_talbanken
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 9287256
num_examples: 4303
- name: validation
num_bytes: 1361535
num_examples: 504
- name: test
num_bytes: 2835742
num_examples: 1219
download_size: 8476012
dataset_size: 13484533
- config_name: gsw_uzh
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 111357
num_examples: 100
download_size: 59675
dataset_size: 111357
- config_name: tl_trg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 86696
num_examples: 128
download_size: 61344
dataset_size: 86696
- config_name: tl_ugnayan
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 90863
num_examples: 94
download_size: 55207
dataset_size: 90863
- config_name: ta_mwtt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 522349
num_examples: 534
download_size: 414263
dataset_size: 522349
- config_name: ta_ttb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1538780
num_examples: 400
- name: validation
num_bytes: 305206
num_examples: 80
- name: test
num_bytes: 478941
num_examples: 120
download_size: 1753448
dataset_size: 2322927
- config_name: te_mtg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 703512
num_examples: 1051
- name: validation
num_bytes: 91547
num_examples: 131
- name: test
num_bytes: 99757
num_examples: 146
download_size: 643764
dataset_size: 894816
- config_name: th_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2341697
num_examples: 1000
download_size: 1606517
dataset_size: 2341697
- config_name: tpn_tudet
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 8089
num_examples: 8
download_size: 5447
dataset_size: 8089
- config_name: qtd_sagt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 583697
num_examples: 285
- name: validation
num_bytes: 1564765
num_examples: 801
- name: test
num_bytes: 1710777
num_examples: 805
download_size: 2299611
dataset_size: 3859239
- config_name: tr_boun
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 12827173
num_examples: 7803
- name: validation
num_bytes: 1577760
num_examples: 979
- name: test
num_bytes: 1580727
num_examples: 979
download_size: 9742035
dataset_size: 15985660
- config_name: tr_gb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2146729
num_examples: 2880
download_size: 1474083
dataset_size: 2146729
- config_name: tr_imst
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 5063905
num_examples: 3664
- name: validation
num_bytes: 1342351
num_examples: 988
- name: test
num_bytes: 1347524
num_examples: 983
download_size: 4711018
dataset_size: 7753780
- config_name: tr_pud
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 2021772
num_examples: 1000
download_size: 1359487
dataset_size: 2021772
- config_name: uk_iu
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 18886802
num_examples: 5496
- name: validation
num_bytes: 2592721
num_examples: 672
- name: test
num_bytes: 3561164
num_examples: 892
download_size: 17344586
dataset_size: 25040687
- config_name: hsb_ufal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 54257
num_examples: 23
- name: test
num_bytes: 1246592
num_examples: 623
download_size: 781067
dataset_size: 1300849
- config_name: ur_udtb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 19808745
num_examples: 4043
- name: validation
num_bytes: 2652349
num_examples: 552
- name: test
num_bytes: 2702596
num_examples: 535
download_size: 15901007
dataset_size: 25163690
- config_name: ug_udt
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2570856
num_examples: 1656
- name: validation
num_bytes: 1406032
num_examples: 900
- name: test
num_bytes: 1371993
num_examples: 900
download_size: 3455092
dataset_size: 5348881
- config_name: vi_vtb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1689772
num_examples: 1400
- name: validation
num_bytes: 948019
num_examples: 800
- name: test
num_bytes: 987207
num_examples: 800
download_size: 2055529
dataset_size: 3624998
- config_name: wbp_ufal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 48533
num_examples: 55
download_size: 38326
dataset_size: 48533
- config_name: cy_ccg
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1629465
num_examples: 704
- name: test
num_bytes: 1779002
num_examples: 953
download_size: 1984759
dataset_size: 3408467
- config_name: wo_wtb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 2781883
num_examples: 1188
- name: validation
num_bytes: 1204839
num_examples: 449
- name: test
num_bytes: 1227124
num_examples: 470
download_size: 3042699
dataset_size: 5213846
- config_name: yo_ytb
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: upos
sequence:
class_label:
names:
0: NOUN
1: PUNCT
2: ADP
3: NUM
4: SYM
5: SCONJ
6: ADJ
7: PART
8: DET
9: CCONJ
10: PROPN
11: PRON
12: X
13: _
14: ADV
15: INTJ
16: VERB
17: AUX
- name: xpos
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: test
num_bytes: 905766
num_examples: 318
download_size: 567955
dataset_size: 905766
---
# Dataset Card for Universal Dependencies Treebank
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Universal Dependencies](https://universaldependencies.org/)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
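Although the card template is unfilled, the feature schema in the metadata above fixes the layout of every instance: parallel sequences (`tokens`, `lemmas`, `upos`, …) plus the raw `text`. A minimal offline sketch of decoding the integer `upos` ids back to UPOS tag names (the sentence and ids below are illustrative, not taken from a treebank):

```python
# UPOS tag inventory shared by every config in this dataset (see the metadata above)
UPOS_NAMES = ["NOUN", "PUNCT", "ADP", "NUM", "SYM", "SCONJ", "ADJ", "PART", "DET",
              "CCONJ", "PROPN", "PRON", "X", "_", "ADV", "INTJ", "VERB", "AUX"]

def decode_upos(ids):
    """Map the integer `upos` ids stored in an instance back to UPOS tag names."""
    return [UPOS_NAMES[i] for i in ids]

# A toy instance mirroring the parallel-sequence layout
tokens = ["El", "perro", "ladra", "."]
upos = [8, 0, 16, 1]
print(list(zip(tokens, decode_upos(upos))))
# → [('El', 'DET'), ('perro', 'NOUN'), ('ladra', 'VERB'), ('.', 'PUNCT')]
```

A real config can then be loaded with the 🤗 `datasets` library, e.g. `load_dataset("universal_dependencies", "es_ancora")` (config name taken from the metadata above).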
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@jplu](https://github.com/jplu) for adding this dataset.
Cohere/wikipedia-22-12-de-embeddings | ---
annotations_creators:
- expert-generated
language:
- de
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (de) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (de)](https://de.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings).
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-de-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-de-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>")  # Add your Cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-22-12-de-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance)
Loie/VGGSound | ---
task_categories:
- audio-classification
size_categories:
- 100B<n<1T
---
# VGGSound
VGG-Sound is an audio-visual correspondence dataset consisting of short clips of audio sounds extracted from videos uploaded to YouTube.
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/vggsound/
- **Paper:** https://arxiv.org/abs/2004.14368
- **Github:** https://github.com/hche11/VGGSound
## Analysis
- **310+ classes:** VGG-Sound contains audio spanning a large number of challenging acoustic environments and noise characteristics of real applications.
- **200,000+ videos:** All videos are captured "in the wild" with audio-visual correspondence in the sense that the sound source is visually evident.
- **550+ hours:** VGG-Sound consists of both audio and video. Each segment is 10 seconds long.
![](src/data.png)
## Download
We provide a csv file. For each YouTube video, we provide the YouTube ID, time stamp, audio label, and train/test split. Each line in the csv file has the columns defined below:
```
# YouTube ID, start seconds, label, train/test split.
```
And you can download VGGSound directly from this [repository](https://huggingface.co/datasets/Loie/VGGSound/tree/main).
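A minimal stdlib sketch of parsing one such row (the video id, start time, label, and split below are illustrative, not a real entry):

```python
import csv
from io import StringIO

# One illustrative row in the csv format above (id, start time, label, split are made up)
sample = '---g-f_I2yQ,1,"playing harp",train\n'

for ytid, start, label, split in csv.reader(StringIO(sample)):
    # Reconstruct a clickable URL pointing at the clip's start time
    clip_url = f"https://www.youtube.com/watch?v={ytid}&t={start}s"
    print(ytid, int(start), label, split, clip_url)
```

Each 10-second segment can then be located by seeking to the start time in the corresponding video.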
## License
The VGG-Sound dataset is available to download for commercial/research purposes under a Creative Commons Attribution 4.0 International License. The copyright remains with the original owners of the video. A complete version of the license can be found [here](https://thor.robots.ox.ac.uk/datasets/vggsound/license_vggsound.txt).
## Citation
Please cite the following if you make use of the dataset.
```
@InProceedings{Chen20,
author = "Honglie Chen and Weidi Xie and Andrea Vedaldi and Andrew Zisserman",
title = "VGGSound: A Large-scale Audio-Visual Dataset",
booktitle = "International Conference on Acoustics, Speech, and Signal Processing (ICASSP)",
year = "2020",
}
```
fnlp/moss-002-sft-data | ---
license: cc-by-4.0
task_categories:
- conversational
- text-generation
language:
- en
- zh
size_categories:
- 1M<n<10M
---
# Dataset Card for "moss-002-sft-data"
## Dataset Description
- **Homepage:** [https://txsun1997.github.io/blogs/moss.html](https://txsun1997.github.io/blogs/moss.html)
- **Repository:** [https://github.com/OpenLMLab/MOSS](https://github.com/OpenLMLab/MOSS)
- **Total amount of disk used:** 2.16 GB
### Dataset Summary
An open-source conversational dataset used to train MOSS-002. The user prompts are expanded from a small set of human-written seed prompts in a way similar to [Self-Instruct](https://arxiv.org/abs/2212.10560). The AI responses are generated with `text-davinci-003`. The user prompts of `en_harmlessness` come from the [Anthropic red teaming data](https://github.com/anthropics/hh-rlhf/tree/master/red-team-attempts).
### Data Splits
| name | \# samples |
|----------------------|-----------:|
| en_helpfulness.json | 419049 |
| en_honesty.json | 112580 |
| en_harmlessness.json | 38873 |
| zh_helpfulness.json | 447750 |
| zh_honesty.json | 142885 |
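As a quick consistency check, the split sizes in the table sum to about 1.16M samples, matching the declared 1M–10M size category (a stdlib-only sketch; the numbers are copied from the table above):

```python
# Split sizes copied from the table above
splits = {
    "en_helpfulness": 419049,
    "en_honesty": 112580,
    "en_harmlessness": 38873,
    "zh_helpfulness": 447750,
    "zh_honesty": 142885,
}
total = sum(splits.values())
print(total)  # 1161137 — consistent with the declared 1M<n<10M size category
```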
silk-road/Vanilla-chinese-alpaca-luotuo | ---
license: apache-2.0
language:
- zh
pretty_name: f
size_categories:
- 10K<n<100K
---
Vanilla Luotuo is the first dataset and model of the Luotuo project, launched on March 21, 2023.
We will gradually release more datasets to HF, including:
- [ ] A Chinese translation of COCO Caption
- [ ] A Chinese translation of CoQA
- [ ] Embedding data for CNewSum
- [ ] Augmented open-domain QA data
- [ ] A Chinese translation of WizardLM
If you are also preparing these datasets, feel free to contact us so we can avoid duplicating costs.
# Luotuo (骆驼): Open-Source Chinese Large Language Models
[https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM)
骆驼(Luotuo)项目是由[冷子昂](https://blairleng.github.io) @ 商汤科技, 陈启源 @ 华中师范大学 以及 李鲁鲁 @ 商汤科技 发起的中文大语言模型开源项目,包含了一系列语言模型。
( 注意: [陈启源](https://qiyuan-chen.github.io/) 正在寻找2024推免导师,欢迎联系 )
骆驼项目**不是**商汤科技的官方产品。
## Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{alpaca,
author={Ziang Leng, Qiyuan Chen and Cheng Li},
title = {Luotuo: An Instruction-following Chinese Language model, LoRA tuning on LLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LC1332/Luotuo-Chinese-LLM}},
}
```
|
bleugreen/typescript-instruct | ---
task_categories:
- text-classification
- text2text-generation
- summarization
language:
- en
tags:
- typescript
- instruct
- code
size_categories:
- 10K<n<100K
---
# typescript-instruct
A dataset of TypeScript snippets, processed from the typescript subset of [the-stack-smol](https://huggingface.co/datasets/bigcode/the-stack-smol).
# Processing
- Each source file is parsed with the TypeScript AST and queried for 'semantic chunks' of the following types.
```
ClassDeclaration - 2401
ArrowFunction - 16443
MethodDeclaration - 12096
FunctionDeclaration - 3226
TypeAliasDeclaration - 1489
InterfaceDeclaration - 5240
EnumDeclaration - 214
```
- Leading comments are added to the front of `content`
- Removed all chunks over max sequence length (2048)
- Deduplicated / cleaned up
- Generated instructions w/ `gpt-3.5-turbo`
- Ran into the OpenAI API usage limit for the month; will finish the other half next month
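The length filtering and deduplication steps can be sketched as follows (an approximation for illustration only: the actual pipeline filtered on tokenizer sequence length rather than character count, and `filter_chunks` is a name invented here):

```python
# Illustrative sketch of the max-length filter and dedup pass over semantic chunks.
def filter_chunks(chunks, max_len=2048):
    seen = set()
    kept = []
    for chunk in chunks:
        content = chunk["content"].strip()
        if len(content) > max_len:   # drop chunks over the max sequence length
            continue
        if content in seen:          # deduplicate on exact content
            continue
        seen.add(content)
        kept.append(chunk)
    return kept

sample = [
    {"type": "FunctionDeclaration", "content": "function add(a: number, b: number) { return a + b; }"},
    {"type": "FunctionDeclaration", "content": "function add(a: number, b: number) { return a + b; }"},
    {"type": "ClassDeclaration", "content": "x" * 5000},  # too long, dropped
]
print(len(filter_chunks(sample)))  # 1
```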
# Dataset Structure
```python
from datasets import load_dataset
load_dataset("bleugreen/typescript-instruct")
DatasetDict({
train: Dataset({
features: ['type', 'content', 'repo', 'path', 'language', 'instruction'],
num_rows: 41109
})
})
``` |
Mutonix/RefGPT-Fact | ---
license: apache-2.0
dataset_info:
features:
- name: dialogue
dtype: string
- name: reference
dtype: string
- name: language
dtype: string
- name: type
dtype: string
splits:
- name: zh
num_bytes: 180760081
num_examples: 50000
- name: en
num_bytes: 464054853
num_examples: 50000
download_size: 260969665
dataset_size: 644814934
task_categories:
- conversational
language:
- zh
- en
arxiv: https://arxiv.org/abs/2305.14994
size_categories:
- 10K<n<100K
---
# Dataset Card for RefGPT-Fact
## Dataset Description
- **Homepage:**
- **Repository:** [https://github.com/ziliwangnlp/RefGPT](https://github.com/ziliwangnlp/RefGPT)
- **Paper:** [https://arxiv.org/abs/2305.14994](https://arxiv.org/abs/2305.14994)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
<p align="center">
<a href="https://arxiv.org/abs/2305.14994"><b>[Paper] RefGPT</b></a> |
<a href="https://github.com/ziliwangnlp/RefGPT"><b>[Github] RefGPT</b></a>
</p>
RefGPT-Fact is a dataset containing 100k multi-turn dialogues about factual knowledge, 50k in English and 50k in Chinese. The English version uses the English Wikipedia as its reference, and the Chinese version uses Baidu Baike, a widely used Chinese online encyclopedia.
### Supported Tasks and Leaderboards
Chatbot instruction finetuning
### Languages
Chinese, English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Please note that the RefGPT datasets, including RefGPT-Fact and RefGPT-Code, have not undergone manual verification, and as such their security cannot be strictly guaranteed. Users should be aware that they are responsible for the results generated using this data.
### Discussion of Biases
As the RefGPT-Fact and RefGPT-Code datasets are collected from references such as Wikipedia and Github repositories, it cannot be ruled out that the references themselves contain factual errors, typos, or, in the case of Github repositories, bugs and malicious code. The datasets may also reflect the biases of the selected references and of the GPT-3.5/GPT-4 models.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@misc{yang2023refgpt,
title={RefGPT: Reference -> Truthful & Customized Dialogues Generation by GPTs and for GPTs},
author={Dongjie Yang and Ruifeng Yuan and YuanTao Fan and YiFei Yang and Zili Wang and Shusen Wang and Hai Zhao},
year={2023},
eprint={2305.14994},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
[More Information Needed] |
asapp/slue-phase-2 | ---
dataset_info:
- config_name: hvb
features:
- name: issue_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: speaker_id
dtype: string
- name: text
dtype: string
- name: utt_index
dtype: int32
- name: channel
dtype: int32
- name: role
dtype: string
- name: start_ms
dtype: int32
- name: duration_ms
dtype: int32
- name: intent
dtype: string
- name: dialog_acts
sequence: string
splits:
- name: train
num_bytes: 803631533.648
num_examples: 11344
- name: validation
num_bytes: 115999281.63
num_examples: 1690
- name: test
num_bytes: 413280185.739
num_examples: 6121
download_size: 1287263357
dataset_size: 1332911001.017
- config_name: sqa5
features:
- name: question_id
dtype: string
- name: question_audio
dtype:
audio:
sampling_rate: 16000
- name: question_speaker_id
dtype: string
- name: raw_question_text
dtype: string
- name: normalized_question_text
dtype: string
- name: document_id
dtype: string
- name: document_audio
dtype:
audio:
sampling_rate: 16000
- name: document_speaker_id
dtype: string
- name: raw_document_text
dtype: string
- name: normalized_document_text
dtype: string
- name: word2time
sequence:
- name: word
dtype: string
- name: normalized_word
dtype: string
- name: start_second
dtype: float64
- name: end_second
dtype: float64
- name: answer_spans
sequence:
- name: answer
dtype: string
- name: start_second
dtype: float64
- name: end_second
dtype: float64
splits:
- name: train
num_bytes: 134775904845.04
num_examples: 46186
- name: validation
num_bytes: 5686714785.843
num_examples: 1939
- name: test
num_bytes: 6967375359.628
num_examples: 2382
- name: verified_test
num_bytes: 1182628989.0
num_examples: 408
download_size: 118074473123
dataset_size: 148612623979.511
- config_name: ted
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: speaker
dtype: string
- name: transcript
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 46573026086.984
num_examples: 3384
- name: validation
num_bytes: 5694199931.0
num_examples: 425
- name: test
num_bytes: 5959094411.0
num_examples: 423
download_size: 58384489268
dataset_size: 58226320428.984
- config_name: vp_nel
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: speaker_id
dtype: string
- name: text
dtype: string
- name: word_timestamps
sequence:
- name: word
dtype: string
- name: start_sec
dtype: float64
- name: end_sec
dtype: float64
- name: ne_timestamps
sequence:
- name: ne_label
dtype: string
- name: start_char_idx
dtype: int32
- name: char_offset
dtype: int32
- name: start_sec
dtype: float64
- name: end_sec
dtype: float64
splits:
- name: validation
num_bytes: 83371882.75
num_examples: 1750
- name: test
num_bytes: 85222143.142
num_examples: 1838
download_size: 165119242
dataset_size: 168594025.89200002
configs:
- config_name: hvb
data_files:
- split: train
path: hvb/train-*
- split: validation
path: hvb/validation-*
- split: test
path: hvb/test-*
- config_name: sqa5
data_files:
- split: train
path: sqa5/train-*
- split: validation
path: sqa5/validation-*
- split: test
path: sqa5/test-*
- split: verified_test
path: sqa5/verified_test-*
- config_name: ted
data_files:
- split: train
path: ted/train-*
- split: validation
path: ted/validation-*
- split: test
path: ted/test-*
- config_name: vp_nel
data_files:
- split: validation
path: vp_nel/validation-*
- split: test
path: vp_nel/test-*
---
### Dataset description
- **(Jan. 8 2024) Test set labels are released**
- **Toolkit Repository:** [https://github.com/asappresearch/slue-toolkit/](https://github.com/asappresearch/slue-toolkit/)
- **Paper:** [https://arxiv.org/abs/2212.10525](https://arxiv.org/abs/2212.10525)
### Licensing Information
#### SLUE-HVB
SLUE-HVB dataset contains a subset of the Gridspace-Stanford Harper Valley speech dataset, and this subset remains under the original CC-BY-4.0 license. See also the original license notice (https://github.com/cricketclub/gridspace-stanford-harper-valley/blob/master/LICENSE).
Additionally, we provide dialog act classification annotations, which are covered by the same CC-BY-4.0 license.
#### SLUE-SQA-5
SLUE-SQA-5 Dataset contains question texts and answer strings (question_text, normalized_question_text, and answer_spans column in .tsv files) from these datasets,
* SQuAD1.1 (for questions whose question_id starts with ‘squad-’)
* Natural Questions (for questions whose question_id starts with ‘nq-’)
* WebQuestions (for questions whose question_id starts with ‘wq-’)
* CuratedTREC (for questions whose question_id starts with ‘trec-’)
* TriviaQA (for questions whose question_id starts with ‘triviaqa-’)
Additionally, we provide audio recordings (.wav files in “question” directories) of these questions.
For questions from TriviaQA (questions whose question_id starts with ‘triviaqa-’), their question texts, answer strings, and audio recordings are licensed with the same Apache License 2.0 as TriviaQA (for more detail, please refer to https://github.com/mandarjoshi90/triviaqa/blob/master/LICENSE).
For questions from the other 4 datasets, their question texts, answer strings, and audio recordings are licensed with Creative Commons Attribution-ShareAlike 4.0 International license.
SLUE-SQA-5 also contains a subset of Spoken Wikipedia, including the audios placed in “document” directories and their transcripts (document_text and normalized_document_text columns in .tsv files). Additionally, we provide the text-to-speech alignments (.txt files in “word2time” directories). These contents are licensed with the same Creative Commons (CC BY-SA 4.0) license as Spoken Wikipedia.
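The `question_id` prefix convention above can be turned into a small lookup helper, e.g. for attributing each question to its source corpus and license (illustrative only; `question_source` is not part of the SLUE toolkit):

```python
# Map a SLUE-SQA-5 question_id prefix back to its source dataset.
PREFIX_TO_SOURCE = {
    "squad": "SQuAD1.1",
    "nq": "Natural Questions",
    "wq": "WebQuestions",
    "trec": "CuratedTREC",
    "triviaqa": "TriviaQA",
}

def question_source(question_id: str) -> str:
    prefix = question_id.split("-", 1)[0]
    return PREFIX_TO_SOURCE[prefix]

print(question_source("triviaqa-12345"))  # TriviaQA
print(question_source("squad-0007"))      # SQuAD1.1
```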
#### SLUE-TED
SLUE-TED Dataset contains TED Talk audios along with the associated abstracts and titles, which were concatenated to create reference summaries. This corpus is licensed with the same Creative Commons (CC BY–NC–ND 4.0 International) license as TED talks. For further information, please refer to the details provided below.
=============================
TED.com
We encourage you to share TED Talks under our Creative Commons license (CC BY–NC–ND 4.0 International), which means they may be shared under the conditions below:
CC: means the type of license rights associated with TED Talks, or Creative Commons
BY: means the requirement to include an attribution to TED as the owner of the TED Talk and include a link to the talk, but do not include any other TED branding on your website or platform, or language that may imply an endorsement.
NC: means you cannot use TED Talks in any commercial context or to gain any type of revenue, payment or fee from the license, sublicense, access or usage of TED Talks in an app of any kind, for any advertising, or in exchange for payment of any kind, including in any ad-supported content or format.
ND: means that no derivative works are permitted so you cannot edit, remix, create, modify or alter the form of the TED Talks in any way. This includes using the TED Talks as the basis for another work, including dubbing, voice-overs, or other translations not authorized by TED. You may not add any more restrictions than we have placed on the TED site content, such as additional legal or technological restrictions on accessing the content.
|
d0rj/gsm8k-ru | ---
annotations_creators:
- crowdsourced
language_creators:
- translated
language:
- ru
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- gsm8k
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: gsm8k
pretty_name: Grade School Math 8K (ru)
tags:
- math-word-problems
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 6815618.0
num_examples: 7473
- name: test
num_bytes: 1234140.0
num_examples: 1319
download_size: 3883654
dataset_size: 8049758.0
---
# gsm8k-ru
A Russian translation of the [gsm8k](https://huggingface.co/datasets/gsm8k) dataset.
## Dataset Description
- **Homepage:** https://openai.com/blog/grade-school-math/
- **Repository:** https://github.com/openai/grade-school-math
- **Paper:** https://arxiv.org/abs/2110.14168 |
Universal-NER/Pile-NER-definition | ---
language:
- en
size_categories:
- 10K<n<100K
---
# Intro
Pile-NER-definition is a set of GPT-generated data for named entity recognition using the definition-based data construction prompt. It was collected by prompting gpt-3.5-turbo-0301 and augmented by negative sampling. Check our [project page](https://universal-ner.github.io/) for more information.
# License
Attribution-NonCommercial 4.0 International |
Kabatubare/medical | ---
tags:
- healthcare
- qna
- nlp
- english
license: other
language:
- en
pretty_name: Medical QnA Datasets
---
# Dataset Card for "Medical" Healthcare QnA Datasets
## Dataset Details
### Dataset Description
The "Medical" dataset is a specialized subset curated from the larger MedDialog collection, featuring healthcare dialogues between doctors and patients. This dataset focuses on conversations from Icliniq, HealthcareMagic, and HealthTap. Written primarily in English, it is designed to serve a broad range of applications such as NLP research, healthcare chatbot development, and medical information retrieval. The dataset contains 24,000 rows.
- **Data Sources**: Curated from MedDialog, focusing on Icliniq, HealthcareMagic, and HealthTap
- **Size**: 24,000 rows
- **Language**: English
### Direct Uses:
- NLP research in healthcare dialogues
- Development of healthcare question-answering systems
- Medical information retrieval
### Limitations and Recommendations:
- Not a substitute for certified medical advice
- Exercise caution in critical healthcare applications
|
bclavie/mmarco-japanese-hard-negatives | ---
language:
- ja
task_categories:
- text-retrieval
dataset_info:
features:
- name: query
dtype: string
- name: positives
sequence: string
- name: negatives
sequence: string
- name: bm25_negatives
sequence: string
- name: original_negatives
sequence: string
splits:
- name: train
num_bytes: 24494938913
num_examples: 391061
download_size: 11664534369
dataset_size: 24494938913
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
[Under Construction]
This is a repository containing all the queries from the Japanese part of the MMarco dataset, the multilingual version of the MSMarco dataset.
For each query, there are matching hard negatives:
- 25 of them retrieved by the multilingual e5 base model.
- Up to 10 of them retrieved by the default BM25 implementation for Japanese in the Anserini library. |
LanguageBind/Video-LLaVA | ---
license: mit
---
|
jhu-clsp/FollowIR-train | ---
license: apache-2.0
language:
- en
tags:
- retrieval
- information retrieval
pretty_name: FollowIR-train
size_categories:
- 1K<n<10K
---
# Dataset Summary
FollowIR-train contains ~1800 query and instruction pairs, with labels for relevance (true or false). It can be used to train retrieval models to better follow instructions (see [FollowIR-7B](https://huggingface.co/jhu-clsp/FollowIR-7B)).
The dataset was created by taking instruction and query pairs from all [TREC tracks](https://trec.nist.gov/) from 1993 onward that provided instructions (which TREC calls "narratives"). Synthetic documents were then created with GPT-3.5-Turbo-1106 and filtered using Mistral-Instruct-7B-v0.2. This dataset contains the filtered instructions only. See [jhu-clsp/FollowIR-train-raw]() for the raw data before filtering.
- **Repository:** [orionw/FollowIR](https://github.com/orionw/FollowIR)
- **Paper:** https://arxiv.org/abs/2403.15246
- **Model Trained on the Dataset:** [jhu-clsp/FollowIR-7B](https://huggingface.co/jhu-clsp/FollowIR-7B/)
The structure of the dataset is as follows:
```
{
"score": the score from Mistral-Instruct-7B-v0.2 of whether it was relevant or not (1 is relevant, 0 is not)
"label": the label of relevance from GPT-3.5-Turbo-1106 who created the document
"id": the id from the original TREC track and the file it came from
"document": the synthetic document produced by GPT-3.5-Turbo-1106 given the original instruction, query, and label
"query": the query written by TREC
"instruction": the instruction (or narrative) written by TREC for human annotation
}
```
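As a sketch of the consistency filtering described above (keeping only rows where the Mistral score agrees with the GPT-generated label), assuming the fields listed; `keep_consistent` and the sample rows are invented for illustration and are not part of the released code:

```python
def keep_consistent(rows):
    """Keep rows where the Mistral relevance score agrees with the GPT-generated label."""
    return [row for row in rows if row["score"] == row["label"]]

rows = [
    {"id": "trec-1", "score": 1, "label": 1, "query": "q1", "instruction": "inst", "document": "doc"},
    {"id": "trec-2", "score": 0, "label": 1, "query": "q2", "instruction": "inst", "document": "doc"},
]
print([row["id"] for row in keep_consistent(rows)])  # ['trec-1']
```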
# Citation
```bibtex
@misc{weller2024followir,
title={FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions},
author={Orion Weller and Benjamin Chang and Sean MacAvaney and Kyle Lo and Arman Cohan and Benjamin Van Durme and Dawn Lawrie and Luca Soldaini},
year={2024},
eprint={2403.15246},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
``` |
Alvenir/alvenir_asr_da_eval | ---
license: cc-by-4.0
---
# Dataset Card alvenir_asr_da_eval
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Prompts/sentence selection](#prompts/sentence-selection)
- [Recording](#recording)
- [Evaluation](#evaluation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** https://alvenir.ai
- **Repository:** https://github.com/danspeech/alvenir-asr-da-eval/
### Dataset Summary
This dataset was created by Alvenir in order to evaluate ASR models in Danish. It can also be used for training but the amount is very limited.
The dataset consists of .wav files with corresponding reference text. The amount of data is just above 5 hours spread across 50 speakers with age in the interval 20-60 years old. The data was collected by a third party vendor through their software and people. All recordings have been validated.
## Dataset Structure
### Data Instances
A data point consists of a path to the audio file, called `path`, and its sentence. Additional fields, such as age and gender, will eventually be added.
```
{'audio': {'path': 'some_path.wav', 'array': array([-0.044223, -0.00031411, -0.00435671, ..., 0.00612312, 0.00014581, 0.00091009], dtype=float32), 'sampling_rate': 16000}}
```
### Data Fields
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
sentence: The sentence the user was prompted to speak
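As a small usage sketch of the decode-on-access pattern above (the helper name and inline stand-in dict are invented; with 🤗 Datasets you would pass in `dataset[0]` after loading the `test` split):

```python
def clip_duration_seconds(example):
    # Access the sample first, then the "audio" key, so only this one file is decoded.
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"]

# Inline stand-in for dataset[0]; a real example carries a numpy array.
example = {"audio": {"path": "some_path.wav", "array": [0.0] * 32000, "sampling_rate": 16000}}
print(clip_duration_seconds(example))  # 2.0
```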
### Data Splits
Since the dataset is intended to be used as a test/evaluation ASR dataset for Danish, there is only a test split.
## Dataset Creation
### Prompts/sentence selection
The sentences used for prompts were gathered from the Danish part of OpenSubtitles (OSS) (need reference) and Wikipedia (WIKI). The OSS prompts were sampled randomly across the dataset, making sure that all prompts are unique. The WIKI prompts were selected by first training a topic model with 30 topics on Wikipedia and then randomly sampling an equal amount of unique sentences from each topic. All sentences were manually inspected.
### Recording
50 unique speakers were all sent 20 WIKI sentences and 60 sentences from OSS. The recordings took place through third party recording software.
### Evaluation
All recordings were evaluated by third party to confirm alignment between audio and text.
### Personal and Sensitive Information
The dataset consists of people who have given their voice to the dataset for ASR purposes. You agree to not attempt to determine the identity of any of the speakers in the dataset.
### Licensing Information
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/)
|
PiC/phrase_sense_disambiguation | ---
annotations_creators:
- expert-generated
language_creators:
- found
- expert-generated
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
paperswithcode_id: phrase-in-context
pretty_name: 'PiC: Phrase Sense Disambiguation'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-retrieval
task_ids: []
---
# Dataset Card for "PiC: Phrase Sense Disambiguation"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://phrase-in-context.github.io/](https://phrase-in-context.github.io/)
- **Repository:** [https://github.com/phrase-in-context](https://github.com/phrase-in-context)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Thang Pham](<thangpham@auburn.edu>)
- **Size of downloaded dataset files:** 49.95 MB
- **Size of the generated dataset:** 43.26 MB
- **Total amount of disk used:** 93.20 MB
### Dataset Summary
PSD is a phrase retrieval task like PR-pass and PR-page but more challenging since each example contains two short paragraphs (~11 sentences each) which trigger different senses of the same phrase.
The goal is to find the instance of the target phrase **t** that is semantically similar to a paraphrase **q**.
The dataset is split into 5,150/3,000/20,002 for test/dev/train, respectively.
<p align="center">
<img src="https://auburn.edu/~tmp0038/PiC/psd_sample.png" alt="PSD sample" style="width:100%; border:0;">
</p>
Given document D, a trained Longformer-large model correctly retrieves <span style="background-color: #ef8783">massive figure</span> in the second paragraph for the query Q<sub>2</sub> "giant number" but **fails** to retrieve the answer when the query Q<sub>1</sub> is "huge model".
The correct answer for Q<sub>1</sub> should be <span style="background-color: #a1fb8e">massive figure</span> in the first passage since this phrase relates to a model rather than a number.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English.
## Dataset Structure
### Data Instances
**PSD**
* Size of downloaded dataset files: 49.95 MB
* Size of the generated dataset: 43.26 MB
* Total amount of disk used: 93.20 MB
An example of 'test' looks as follows.
```
{
"id": "297-1",
"title": "https://en.wikipedia.org/wiki?curid=2226019,https://en.wikipedia.org/wiki?curid=1191780",
"context": "In addition, the results from the study did not support the idea of females preferring complexity over simplicity in song sequences. These findings differ from past examinations, like the 2008 Morisake et al. study that suggested evidence of female Bengalese finches preferring complex songs over simple ones. Evolutionary adaptations of specifically complex song production in relation to female preference in Bengalese finches continues to be a topic worth examining. Comparison with zebra finches. Bengalese finches and zebra finches are members of the estrildiae family and are age-limited learners when it comes to song learning and the acoustic characteristics of their songs (Peng et al., 2012). Both of these species have been widely used in song learning based animal behavior research and although they share many characteristics researchers have been able to determine stark differences between the two. Previous to research done in 1987, it was thought that song learning in Bengalese finches was similar to zebra finches but there was no research to support this idea. Both species require learning from an adult during a sensitive juvenile phase in order to learn the species specific and sexually dimorphic songs. This tutor can be the father of the young or other adult males that are present around the juvenile. Clayton aimed to directly compare the song learning ability of both of these species to determine if they have separate learning behaviors. Many students find they can not possibly complete all the work assigned them; they learn to neglect some of it. Some student groups maintain files of past examinations which only worsen this situation. The difference between the formal and real requirements produced considerable dissonance among the students and resulted in cynicism, scorn, and hypocrisy among students, and particular difficulty for minority students. 
No part of the university community, writes Snyder, neither the professors, the administration nor the students, desires the end result created by this process. The \"Saturday Review\" said the book \"will gain recognition as one of the more cogent 'college unrest' books\" and that it presents a \"most provocative thesis.\" The book has been cited many times in studies. References. [[Category:Curricula]] [[Category:Philosophy of education]] [[Category:Massachusetts Institute of Technology]] [[Category:Books about social psychology]] [[Category:Student culture]] [[Category:Books about education]] [[Category:1970 non-fiction books]]",
"query": "previous exams",
"answers": {
"text": ["past examinations"],
"answer_start": [1621]
}
}
```
### Data Fields
The data fields are the same among all subsets and splits.
* id: a string feature.
* title: a string feature.
* context: a string feature.
* question: a string feature.
* answers: a dictionary feature containing:
* text: a list of string features.
* answer_start: a list of int32 features.
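Since `answer_start` is a character offset into `context`, the gold phrase can be recovered by slicing (a minimal sketch; `extract_answer_span` is a name invented for illustration, and the inline example abbreviates the context from the instance above):

```python
def extract_answer_span(example):
    """Recover the gold phrase from the context via its character offset."""
    text = example["answers"]["text"][0]
    start = example["answers"]["answer_start"][0]
    span = example["context"][start:start + len(text)]
    assert span == text, "offset should line up with the answer string"
    return span

example = {
    "context": "Some student groups maintain files of past examinations.",
    "query": "previous exams",
    "answers": {"text": ["past examinations"], "answer_start": [38]},
}
print(extract_answer_span(example))  # past examinations
```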
### Data Splits
| name |train|validation|test|
|--------------------|----:|---------:|---:|
|PSD |20002| 3000|5000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The source passages and answers are from Wikipedia, and the queries were produced by linguistic experts hired from [Upwork.com](https://upwork.com).
#### Who are the source language producers?
We hired 13 linguistic experts from [Upwork.com](https://upwork.com) for annotation and more than 1000 human annotators on Mechanical Turk along with another set of 5 Upwork experts for 2-round verification.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
13 linguistic experts from [Upwork.com](https://upwork.com).
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset is a joint work between Adobe Research and Auburn University.
Creators: [Thang M. Pham](https://scholar.google.com/citations?user=eNrX3mYAAAAJ), [David Seunghyun Yoon](https://david-yoon.github.io/), [Trung Bui](https://sites.google.com/site/trungbuistanford/), and [Anh Nguyen](https://anhnguyen.me).
[@PMThangXAI](https://twitter.com/pmthangxai) added this dataset to HuggingFace.
### Licensing Information
This dataset is distributed under [Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/)
### Citation Information
```
@article{pham2022PiC,
title={PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic Search},
author={Pham, Thang M and Yoon, Seunghyun and Bui, Trung and Nguyen, Anh},
journal={arXiv preprint arXiv:2207.09068},
year={2022}
}
``` |
embedding-data/WikiAnswers | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/WikiAnswers
pretty_name: WikiAnswers
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "WikiAnswers"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/afader/oqa#wikianswers-corpus](https://github.com/afader/oqa#wikianswers-corpus)
- **Repository:** [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
- **Paper:** [More Information Needed](https://doi.org/10.1145/2623330.2623677)
- **Point of Contact:** [Anthony Fader](https://dl.acm.org/profile/81324489111), [Luke Zettlemoyer](https://dl.acm.org/profile/81100527621), [Oren Etzioni](https://dl.acm.org/profile/99658633129)
### Dataset Summary
The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases.
Each cluster optionally contains an answer provided by WikiAnswers users. There are 30,370,994 clusters containing an average of 25 questions per cluster. 3,386,256 (11%) of the clusters have an answer.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with a single key, "set", whose value is the list of sentences.
```
{"set": [sentence_1, sentence_2, ..., sentence_25]}
{"set": [sentence_1, sentence_2, ..., sentence_25]}
...
{"set": [sentence_1, sentence_2, ..., sentence_25]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/WikiAnswers")
```
The dataset is loaded as a `DatasetDict` and has the following format for `N` examples:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: N
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
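For similarity training, each paraphrase cluster can be expanded into positive sentence pairs. The sketch below is an illustrative helper, not part of the dataset tooling or the Sentence Transformers API:

```python
from itertools import combinations

def cluster_to_pairs(cluster, max_pairs=None):
    """Expand one paraphrase cluster ({"set": [...]}) into positive sentence pairs."""
    sentences = cluster["set"]
    pairs = list(combinations(sentences, 2))  # every unordered pair of paraphrases
    return pairs[:max_pairs] if max_pairs is not None else pairs

# Toy cluster (real clusters hold 25 questions each)
example = {"set": ["How tall is Everest?",
                   "What is the height of Mount Everest?",
                   "Everest height?"]}
print(cluster_to_pairs(example, max_pairs=2))
```

Pairs produced this way can then feed a contrastive or multiple-negatives training objective.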
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
#### Who are the source language producers?
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
#### Who are the annotators?
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Personal and Sensitive Information
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Discussion of Biases
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Other Known Limitations
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Licensing Information
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Citation Information
```
@inproceedings{Fader14,
author = {Anthony Fader and Luke Zettlemoyer and Oren Etzioni},
title = {{Open Question Answering Over Curated and Extracted
Knowledge Bases}},
booktitle = {KDD},
year = {2014}
}
```
### Contributions
|
biglam/hansard_speech | ---
annotations_creators:
- no-annotation
language:
- 'en'
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Hansard Speeches
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- speeches
- politics
- parliament
- British
task_categories:
- text-classification
- text-generation
task_ids:
- multi-class-classification
- language-modeling
- masked-language-modeling
---
# Dataset Card for Hansard speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://evanodell.com/projects/datasets/hansard-data/
- **Repository:** https://github.com/evanodell/hansard-data3
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Evan Odell](https://github.com/evanodell)
### Dataset Summary
A dataset containing every speech in the House of Commons from May 1979 to July 2020. Quoted from the dataset homepage:
> Please contact me if you find any errors in the dataset. The integrity of the public Hansard record is questionable at times, and while I have improved it, the data is presented "as is".
### Supported Tasks and Leaderboards
- `text-classification`: This dataset can be used to classify speech transcripts by time period or by type of speech
- `language-modeling`: This dataset can contribute to the training or the evaluation of language models for historical texts.
### Languages
`en-GB`
## Dataset Structure
### Data Instances
```
{
'id': 'uk.org.publicwhip/debate/1979-05-17a.390.0',
'speech': "Since the Minister for Consumer Affairs said earlier that the bread price rise would be allowed, in view of developing unemployment in the baking industry, and since the Mother's Pride bakery in my constituency is about to close, will the right hon. Gentleman give us a firm assurance that there will be an early debate on the future of the industry, so that the Government may announce that, thanks to the price rise, those workers will not now be put out of work?",
'display_as': 'Eric Heffer',
'party': 'Labour',
'constituency': 'Liverpool, Walton',
'mnis_id': '725',
'date': '1979-05-17',
'time': '',
'colnum': '390',
'speech_class': 'Speech',
'major_heading': 'BUSINESS OF THE HOUSE',
'minor_heading': '',
'oral_heading': '',
'year': '1979',
'hansard_membership_id': '5612',
'speakerid': 'uk.org.publicwhip/member/11615',
'person_id': '',
'speakername': 'Mr. Heffer',
'url': '',
'government_posts': [],
'opposition_posts': [],
'parliamentary_posts': ['Member, Labour Party National Executive Committee']
}
```
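Records with this shape can be filtered with plain Python once a split is loaded; `speeches_by` below is an illustrative helper, not part of the dataset tooling:

```python
def speeches_by(records, party=None, year=None):
    """Filter Hansard records by party and/or year (illustrative helper)."""
    out = []
    for r in records:
        if party is not None and r.get("party") != party:
            continue
        if year is not None and r.get("year") != year:
            continue
        out.append(r)
    return out

# Two abbreviated records standing in for ds["train"]
records = [
    {"display_as": "Eric Heffer", "party": "Labour", "year": "1979"},
    {"display_as": "Someone Else", "party": "Conservative", "year": "1980"},
]
print([r["display_as"] for r in speeches_by(records, party="Labour")])
```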
### Data Fields
|Variable|Description|
|---|---|
|id|The ID as assigned by mysociety|
|speech|The text of the speech|
|display_as| The standardised name of the MP.|
|party|The party an MP is member of at time of speech|
|constituency| Constituency represented by MP at time of speech|
|mnis_id| The MP's Members Name Information Service number|
|date|Date of speech|
|time|Time of speech|
|colnum |Column number in hansard record|
|speech_class |Type of speech|
|major_heading| Major debate heading|
|minor_heading| Minor debate heading|
|oral_heading| Oral debate heading|
|year |Year of speech|
|hansard_membership_id| ID used by mysociety|
|speakerid |ID used by mysociety|
|person_id |ID used by mysociety|
|speakername| MP name as appeared in Hansard record for speech|
|url| link to speech|
|government_posts| Government posts held by MP (list)|
|opposition_posts |Opposition posts held by MP (list)|
|parliamentary_posts| Parliamentary posts held by MP (list)|
### Data Splits
Train: 2,694,375
## Dataset Creation
### Curation Rationale
This dataset contains all the speeches made in the House of Commons and can be used for a number of deep learning tasks, such as detecting how language and societal views have changed over the more than 40 years it covers. The dataset also provides language close to the spoken language used in an elite British institution.
### Source Data
#### Initial Data Collection and Normalization
The dataset is created by getting the data from [data.parliament.uk](http://data.parliament.uk/membersdataplatform/memberquery.aspx). There is no normalization.
#### Who are the source language producers?
[N/A]
### Annotations
#### Annotation process
None
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
This is public information, so it should not contain any personal or sensitive information
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to understand how language use and society's views have changed over time.
### Discussion of Biases
Because of the long time period this dataset spans, it might contain language and opinions that are unacceptable in modern society.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
This dataset was built on top of [parlparse](https://github.com/mysociety/parlparse) by [Evan Odell](https://github.com/evanodell)
### Licensing Information
Creative Commons Attribution 4.0 International License
### Citation Information
```
@misc{odell_evan_2021,
title={Hansard Speeches 1979-2021: Version 3.1.0},
DOI={10.5281/zenodo.4843485},
abstractNote={<p>Full details are available at <a href="https://evanodell.com/projects/datasets/hansard-data">https://evanodell.com/projects/datasets/hansard-data</a></p> <p><strong>Version 3.1.0 contains the following changes:</strong></p> <p>- Coverage up to the end of April 2021</p>},
note={This release is an update of previously released datasets. See full documentation for details.},
publisher={Zenodo},
author={Odell, Evan},
year={2021},
month={May} }
```
Thanks to [@shamikbose](https://github.com/shamikbose) for adding this dataset. |
bigbio/n2c2_2018_track2 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: DUA
pretty_name: n2c2 2018 ADE
homepage: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
---
# Dataset Card for n2c2 2018 ADE
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER,RE
The National NLP Clinical Challenges (n2c2), organized in 2018, continued the
legacy of i2b2 (Informatics for Integrating Biology and the Bedside), adding 2 new tracks and 2
new sets of data to the shared tasks organized since 2006. Track 2 of 2018
n2c2 shared tasks focused on the extraction of medications, with their signature
information, and adverse drug events (ADEs) from clinical narratives.
This track built on our previous medication challenge, but added a special focus on ADEs.
ADEs are injuries resulting from a medical intervention related to a drug and
can include allergic reactions, drug interactions, overdoses, and medication errors.
Collectively, ADEs are estimated to account for 30% of all hospital adverse
events; however, ADEs are preventable. Identifying potential drug interactions,
overdoses, allergies, and errors at the point of care and alerting the caregivers of
potential ADEs can improve health delivery, reduce the risk of ADEs, and improve health
outcomes.
A step in this direction requires processing narratives of clinical records
that often elaborate on the medications given to a patient, as well as the known
allergies, reactions, and adverse events of the patient. Extraction of this information
from narratives complements the structured medication information that can be
obtained from prescriptions, allowing a more thorough assessment of potential ADEs
before they happen.
The 2018 n2c2 shared task Track 2, hereon referred to as the ADE track,
tackled these natural language processing tasks in 3 different steps,
which we refer to as tasks:
1. Concept Extraction: identification of concepts related to medications,
their signature information, and ADEs
2. Relation Classification: linking the previously mentioned concepts to
their medication by identifying relations on gold standard concepts
3. End-to-End: building end-to-end systems that process raw narrative text
to discover concepts and find relations of those concepts to their medications
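The first two tasks operate over character-offset concept spans and typed relations between gold concept ids. One way to represent that structure is sketched below; the field names are illustrative assumptions, not the official n2c2 annotation schema:

```python
from dataclasses import dataclass

@dataclass
class Concept:
    id: str
    type: str   # e.g. "Drug", "ADE", "Dosage"
    start: int  # character offsets into the note text
    end: int
    text: str

@dataclass
class Relation:
    type: str   # e.g. "ADE-Drug"
    arg1: str   # Concept ids
    arg2: str

note = "Patient developed a rash after starting penicillin."
c1 = Concept("T1", "ADE", 20, 24, note[20:24])
c2 = Concept("T2", "Drug", 40, 50, note[40:50])
rel = Relation("ADE-Drug", c1.id, c2.id)
print(c1.text, "->", c2.text)
```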
Shared tasks provide a venue for head-to-head comparison of systems developed
for the same task and on the same data, allowing researchers to identify the state
of the art in a particular task, learn from it, and build on it.
## Citation Information
```
@article{DBLP:journals/jamia/HenryBFSU20,
author = {
Sam Henry and
Kevin Buchan and
Michele Filannino and
Amber Stubbs and
Ozlem Uzuner
},
title = {2018 n2c2 shared task on adverse drug events and medication extraction
in electronic health records},
journal = {J. Am. Medical Informatics Assoc.},
volume = {27},
number = {1},
pages = {3--12},
year = {2020},
url = {https://doi.org/10.1093/jamia/ocz166},
doi = {10.1093/jamia/ocz166},
timestamp = {Sat, 30 May 2020 19:53:56 +0200},
biburl = {https://dblp.org/rec/journals/jamia/HenryBFSU20.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
Norod78/microsoft-fluentui-emoji-512-whitebg | ---
language: en
license: mit
size_categories:
- n<10K
task_categories:
- unconditional-image-generation
- text-to-image
pretty_name: Microsoft FluentUI Emoji 512x512 White Background
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 329173985.708
num_examples: 7564
download_size: 338676474
dataset_size: 329173985.708
tags:
- emoji
- fluentui
---
# Dataset Card for "microsoft-fluentui-emoji-512-whitebg"
[SVG files and their file names from Microsoft's fluentui-emoji repo were converted to images and text captions](https://github.com/microsoft/fluentui-emoji)
keremberke/pcb-defect-segmentation | ---
task_categories:
- image-segmentation
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="keremberke/pcb-defect-segmentation" src="https://huggingface.co/datasets/keremberke/pcb-defect-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['dry_joint', 'incorrect_installation', 'pcb_damage', 'short_circuit']
```
### Number of Images
```json
{'valid': 25, 'train': 128, 'test': 36}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/pcb-defect-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/diplom-qz7q6/defects-2q87r/dataset/8](https://universe.roboflow.com/diplom-qz7q6/defects-2q87r/dataset/8?ref=roboflow2huggingface)
### Citation
```
@misc{ defects-2q87r_dataset,
title = { Defects Dataset },
type = { Open Source Dataset },
author = { Diplom },
howpublished = { \\url{ https://universe.roboflow.com/diplom-qz7q6/defects-2q87r } },
url = { https://universe.roboflow.com/diplom-qz7q6/defects-2q87r },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jan },
note = { visited on 2023-01-27 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 27, 2023 at 1:45 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 189 images.
Defects are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
|
tsdocode/vi_alpaca_clean | ---
license: cc-by-4.0
language:
- vi
tags:
- instruction-finetuning
pretty_name: Vietnamese Alpaca
task_categories:
- text-generation
---
|
cosimoiaia/Loquace-102k | ---
license: cc-by-nc-2.0
task_categories:
- conversational
language:
- it
tags:
- llama
- instruction-tuning
size_categories:
- 100K<n<1M
--- |
iqballx/indonesian_news_datasets | ---
license: cc-by-nc-4.0
task_categories:
- text-classification
- summarization
language:
- id
---
# Indonesian News Datasets
The dataset compiles information from seven prominent Indonesian news platforms: **Tempo, CNN Indonesia, CNBC Indonesia, Okezone, Suara, Kumparan, and JawaPos**. Each source contributes a diverse range of articles, collectively forming a comprehensive repository of Indonesian news content.

Each record contains the following columns:

- `id`: unique identifier for the news article
- `title`: the headline
- `image`: the associated visual content
- `url`: the source link
- `content`: the full text of the news article
- `embedding`: text embeddings extracted using the OpenAI `text-embedding-ada-002` model
- `created_at`: creation timestamp
- `updated_at`: last update timestamp
- `summary`: a concise article summary generated via the ChatGPT API

These columns collectively represent a comprehensive set of information for each news article, facilitating various natural language processing tasks and analysis while providing enriched contextual details for effective exploration and modeling. |
argilla/ultrafeedback-critique | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: critique
dtype: string
- name: overall_score
dtype: float64
splits:
- name: train
num_bytes: 639775765
num_examples: 252984
download_size: 253016899
dataset_size: 639775765
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# UltraFeedback - Critique (raw)
This dataset contains a raw version of [ultrafeedback-curated](https://huggingface.co/datasets/argilla/ultrafeedback-curated) prepared to train a critique model.
## Introduction
This dataset is built so that each record contains the original instruction, the response from each one of the models, the corresponding critique, and its overall score. We have prepared the dataset so that applying supervised fine-tuning (SFT) to a critique model can be as simple as possible.
## Data processing
We extracted each `instruction`; for each model completion in the original `completions` column we obtained the model's `response`, the `critique` rationale, and the `overall_score`. Exploring the dataset, we noticed that for some records the `overall_score` was always low, indicating that the instruction does not make sense, so we decided to remove the records for which all `overall_score` responses were below a given threshold (`overall_score_threshold == 4`). The final dataset has 252,984 records; around 720 instructions from the original dataset were removed.
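A minimal sketch of the threshold rule described above: an instruction is dropped only when every one of its completions scores below the threshold. The function and variable names are illustrative, not the code from the notebook:

```python
OVERALL_SCORE_THRESHOLD = 4

def keep_instruction(completions):
    """Keep an instruction if at least one completion reaches the threshold."""
    return any(c["overall_score"] >= OVERALL_SCORE_THRESHOLD for c in completions)

good = [{"overall_score": 2.0}, {"overall_score": 7.5}]  # one strong completion
bad = [{"overall_score": 1.0}, {"overall_score": 3.5}]   # all below threshold
print(keep_instruction(good), keep_instruction(bad))  # True False
```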
## Reproduce
<a target="_blank" href="https://colab.research.google.com/drive/1ggdYCjDALk9ZI9z7j_dLDapabt6AznQn?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
To reproduce the processing of the dataset feel free to run the attached Colab Notebook or just view it at [`notebook.ipynb`](./notebook.ipynb) within this repository.
From Argilla we encourage anyone out there to play around, investigate, and experiment with the data, and we firmly believe in open-sourcing what we do: we ourselves, as well as the whole community, benefit a lot from open source, and we also want to give back.
Example record:
```json
{
"instruction": "I want to do exploratory data analysis on below dataset found on kaggle\n\nhttps://www.kaggle.com/datasets/neuromusic/avocado-prices",
"response": "Hello! I'm here to help you with your exploratory data analysis on the avocado prices dataset from Kaggle. This dataset contains information on avocado prices from different regions in the United States, including California, Florida, and Illinois.\n\nBefore we dive into the analysis, I want to make sure that you are aware of the potential biases and limitations of the dataset. The dataset is collected from a single source, and it may not be representative of the entire avocado market or the entire United States. Additionally, the dataset only contains information on three regions, which may not be enough to make generalizations about the entire country.\n\nThat being said, there are many interesting patterns and trends that we can explore in this dataset. For example, we can analyze the price trends over time to see if there are any seasonal patterns or long-term trends. We can also compare the prices across different regions to see if there are any differences in pricing strategies or market conditions.\n\nTo get started, I recommend that we first inspect the dataset to get a better understanding of the variables and their distributions. We can use statistical methods like histograms and box plots to visualize the data and identify any outliers or anomalies.\n\nOnce we have a better understanding of the data, we can move on to more advanced analytical techniques like regression analysis or time series analysis. These methods can help us identify the factors that influence avocado prices and make predictions about future price trends.\n\nThroughout the analysis, I will make sure to provide you with accurate and unbiased information, and I will always be mindful of any ethical or social implications of our findings. I'm excited to work with you on this project and help you gain insights into the avocado market!\n\nIs there anything specific you would like to explore in the dataset, or any questions you have before we begin?",
"critique": "Your answer was generally good, covering a broad range of points. You explained well about the dataset and potential biases, which shows a good understanding. However, there are a few areas for improvement. \n\nFirst, you mentioned that the dataset only contains information on three regions - California, Florida, and Illinois. This is not accurate. The dataset includes multiple regions. You need to ensure your statements are factual.\n\nSecond, you should have asked the user what kind of exploratory data analysis they want to perform or suggested some specific analyses that could be done, such as looking for correlations between variables, instead of going straight into advanced techniques like regression analysis or time series analysis which might not be necessary at this stage.\n\nLastly, the user didn't ask for a detailed explanation of how to do the analysis. The instruction was to provide a response as an AI assistant, so your role was to provide information or suggestions, not to guide the user through the whole process.",
"overall_score": 6.5
}
```
## Citation
If you find this dataset is useful in your work, please cite the original UltraFeedback dataset: https://huggingface.co/datasets/openbmb/UltraFeedback
|
grimulkan/PIPPA-augmented-dedup | ---
license: unknown
tags:
- not-for-all-audiences
---
An augmented and further cleaned version of [PIPPA-shareGPT](https://huggingface.co/datasets/kingbri/PIPPA-shareGPT) (specifically `pippa_sharegpt_trimmed.jsonl`, drawn from [PygmalionAI's PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA)) in Fastchat format, modified in the following ways:
- The first prompt is modified to add context and simple references to aspects of the conversation (OOC, use of emojis, content).
- All {name} and {char} replaced by actual names and characters randomly generated by [Faker](https://pypi.org/project/Faker/).
- Very short conversations (<50 tokens) removed.
- Further de-duplicated, keeping the longest unique conversation.
- Conversations were made to be alternating (user/assistant), always starting with the user, and ending with the assistant. |
jon-tow/open-english-wordnet-synset-2023 | ---
license: cc-by-4.0
configs:
- config_name: default
data_files: open_english_wordnet_2023.jsonl
---
# open-english-wordnet-synset-2023
Open English WordNet (2023)
## Dataset Details
### Dataset Description
Open English WordNet is a lexical network of the English language grouping words into synsets and linking them according to relationships such as hypernymy, antonymy and meronymy. It is intended to be used in natural language processing applications and provides deep lexical information about the English language as a graph.
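Hypernymy links make the resource a directed graph that can be walked upward from any synset. The synset ids below are toy examples, not actual Open English WordNet entries or its file format:

```python
# Toy graph: synset id -> (lemmas, hypernym id or None)
synsets = {
    "dog.n.01":    (["dog"], "canine.n.01"),
    "canine.n.01": (["canine"], "animal.n.01"),
    "animal.n.01": (["animal"], None),
}

def hypernym_chain(synset_id):
    """Walk hypernymy links from a synset up to the root."""
    chain = []
    while synset_id is not None:
        chain.append(synset_id)
        synset_id = synsets[synset_id][1]
    return chain

print(hypernym_chain("dog.n.01"))  # ['dog.n.01', 'canine.n.01', 'animal.n.01']
```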
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/globalwordnet/english-wordnet
- **Paper:** John P. McCrae, Alexandre Rademaker, Francis Bond, Ewa Rudnicka and Christiane Fellbaum (2019) [English WordNet 2019 – An Open-Source WordNet for English](https://aclanthology.org/2019.gwc-1.31/). In Proceedings of the 10th Global WordNet Conference – GWC 2019, Wrocław
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```bibtex
@inproceedings{mccrae-etal-2019-english,
title = "{E}nglish {W}ord{N}et 2019 {--} An Open-Source {W}ord{N}et for {E}nglish",
author = "McCrae, John P. and
Rademaker, Alexandre and
Bond, Francis and
Rudnicka, Ewa and
Fellbaum, Christiane",
editor = "Vossen, Piek and
Fellbaum, Christiane",
booktitle = "Proceedings of the 10th Global Wordnet Conference",
month = jul,
year = "2019",
address = "Wroclaw, Poland",
publisher = "Global Wordnet Association",
url = "https://aclanthology.org/2019.gwc-1.31",
pages = "245--252",
abstract = "We describe the release of a new wordnet for English based on the Princeton WordNet, but now developed under an open-source model. In particular, this version of WordNet, which we call English WordNet 2019, which has been developed by multiple people around the world through GitHub, fixes many errors in previous wordnets for English. We give some details of the changes that have been made in this version and give some perspectives about likely future changes that will be made as this project continues to evolve.",
}
``` |
UWV/Leesplank_NL_wikipedia_simplifications | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
- text-classification
- translation
- text2text-generation
language:
- nl
pretty_name: Leesplank wikipedia vereenvoudigingen
size_categories:
- 1M<n<10M
---
The set contains 2.87M paragraphs of prompt/result combinations, where the prompt is a paragraph from Dutch Wikipedia and the result is a simplified text, which may comprise more than one paragraph.
This dataset was created by UWV, as a part of project "Leesplank", an effort to generate datasets that are ethically and legally sound.
The basis of this dataset was the Wikipedia extract that is part of Gigacorpus (http://gigacorpus.nl/). The lines were fed one by one into GPT-4 1106 preview with the strictest filtering level.
This means that this dataset is cleaned by the [OpenAI filtering standards](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/content-filter).
The prompt used was:
"Simplify a Dutch paragraph directly into a single, clear, and engaging text suitable for adult readers that speak Dutch as a second language, using words from the 'basiswoordenlijst Amsterdamse kleuters.' Maintain direct quotes, simplify dialogue, explain cultural references, idioms, and technical terms naturally within the text. Adjust the order of information for improved simplicity, engagement, and readability. Attempt to not use any commas or diminutives."
The prompt was used as a system prompt.
Hyperparameters were: temperature 0.5, top_p 0.2.
After generation, the resulting prompt/result combinations were filtered: items with a prompt/result length ratio greater than 4 or smaller than 0.25 were deemed "bad" and are not included in the set.
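The length-ratio filter can be sketched as follows. This is an illustrative reimplementation, not the original processing script, and it assumes the ratio is computed over character lengths:

```python
def keep_pair(prompt, result, max_ratio=4.0):
    """Keep a pair only if the prompt/result length ratio lies in [1/max_ratio, max_ratio]."""
    ratio = len(prompt) / len(result)
    return (1.0 / max_ratio) <= ratio <= max_ratio

print(keep_pair("a" * 100, "a" * 50))  # True: ratio 2.0 is within bounds
print(keep_pair("a" * 100, "a" * 10))  # False: ratio 10.0 exceeds 4
```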
Another round of processing may be required to filter out items that originated as map descriptions.
|
nvidia/OpenMath-MATH-masked | ---
license: other
license_name: nvidia-license
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- math
- nvidia
pretty_name: OpenMath MATH Masked
size_categories:
- 1K<n<10K
---
# OpenMath MATH Masked
We release a *masked* version of the [MATH](https://github.com/hendrycks/math) solutions.
This data can be used to aid synthetic generation of additional solutions for the MATH dataset
as it is much less likely to lead to inconsistent reasoning compared to using
the original solutions directly.
This dataset was used to construct [OpenMathInstruct-1](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1):
a math instruction tuning dataset with 1.8M problem-solution pairs
generated using permissively licensed [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model.
For details of how the masked solutions were created, see our [paper](https://arxiv.org/abs/2402.10176).
You can re-create this dataset or apply similar techniques to mask solutions for other datasets
by using our [open-sourced code](https://github.com/Kipok/NeMo-Skills).
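As a rough intuition for what masking looks like, the sketch below replaces each distinct number in a solution with a placeholder symbol. This is a simplified illustration, not the actual procedure from the paper or the NeMo-Skills code:

```python
import re

def mask_numbers(solution):
    """Replace each distinct number in a solution with a placeholder M_i (simplified sketch)."""
    placeholders = {}

    def repl(match):
        value = match.group(0)
        if value not in placeholders:
            placeholders[value] = f"M_{len(placeholders)}"
        return placeholders[value]

    return re.sub(r"\d+(?:\.\d+)?", repl, solution), placeholders

masked, mapping = mask_numbers("48 / 2 = 24 eggs, so 24 * 2 = 48.")
print(masked)  # M_0 / M_1 = M_2 eggs, so M_2 * M_1 = M_0.
```

A solution masked this way keeps the reasoning structure while hiding the specific values, which is what makes it safer to condition synthetic generation on.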
## Citation
If you find our work useful, please consider citing us!
```bibtex
@article{toshniwal2024openmath,
title = {OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset},
author = {Shubham Toshniwal and Ivan Moshkov and Sean Narenthiran and Daria Gitman and Fei Jia and Igor Gitman},
year = {2024},
journal = {arXiv preprint arXiv: Arxiv-2402.10176}
}
```
## License
The use of this dataset is governed by the [NVIDIA License](LICENSE) which permits commercial usage.
|
Crystalcareai/MoD-150k | ---
license: apache-2.0
datasets:
- jsonifize/Tested-188k-Python-Alpaca_stringified-jsonifize
- Norquinal/WizardLM_alpaca_claude_evol_instruct_70k
- allenai/ai2_arc
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- google/boolq
- LDJnr/Capybara
- mattpscott/airoboros-summarization
- Locutusque/Hercules-v1.0
- lmsys/lmsys-chat-1m
- Muennighoff/natural-instructions
- HuggingFaceH4/no_robots
- grimulkan/PIPPA-augmented-dedup
- euclaise/reddit-instruct
- teknium/OpenHermes-2.5
- ropes
- Open-Orca/SlimOrca-Dedup
- migtissera/Synthia-v1.3
- HuggingFaceH4/ultrachat_200k
- winogrande
- CollectiveCognition/chats-data-2023-09-22
- CollectiveCognition/chats-data-2023-09-27
- CollectiveCognition/chats-data-2023-10-16
- Locutusque/GPT4-LLM-Cleaned-chatml
- Locutusque/GPT4-roleplay-chatml
- Locutusque/GPT4-roleplay-v2-chatml
- Locutusque/WizardLM_evol_instruct_70k_chatml
- Locutusque/camel-chatml
- Locutusque/code-assistant-chatml
- Locutusque/code-assistant-v2-chatml
- Locutusque/dolphin-gpt4-chatml
- Locutusque/function-calling-chatml
- Locutusque/general-instruct-chatml
- Locutusque/lmsys-chat-1m-best
- Locutusque/medtext-chatml
- Locutusque/metamathqa-chatml
- Locutusque/platypus-chatml
- Locutusque/pubmedqa-chatml
- Locutusque/unnatural-instructions-chatml
---
## Introduction
I'm excited to share the MoD 150k subset, a selection from the broader Mixture of Data project I've been working on. This subset is crafted for those looking to fine-tune AI models for both Mixture of Experts (MoE) architectures and standard architectures, with a keen eye on accessibility for those with limited computational resources.
## My Experimentation
After diving deep into MoEs and conducting various experiments, I've found this 150k subset not only facilitates adaptation to MoE but also significantly benefits standard architectures. Running three epochs with a 7B parameter model on this dataset resulted in a diverse and effective model.
## The Dataset
Originally curated for MoE, the subset has proven equally potent for standard model architectures. Distilled from a vast array of sources, it aims to foster innovation and exploration within our community, especially for those without extensive compute resources.
## Acknowledgments
I'm grateful for the contributions from the community and the insights from various datasets and researchers. Their dedication has inspired this project, and I look forward to seeing how it is used and adapted.
Thank you for your support,
Lucas
datasets used:
- jsonifize/Tested-188k-Python-Alpaca_stringified-jsonifize
- Norquinal/WizardLM_alpaca_claude_evol_instruct_70k
- allenai/ai2_arc
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- google/boolq
- LDJnr/Capybara
- mattpscott/airoboros-summarization
- Locutusque/Hercules-v1.0
- lmsys/lmsys-chat-1m
- Muennighoff/natural-instructions
- HuggingFaceH4/no_robots
- grimulkan/PIPPA-augmented-dedup
- euclaise/reddit-instruct
- teknium/OpenHermes-2.5
- ropes
- Open-Orca/SlimOrca-Dedup
- migtissera/Synthia-v1.3
- HuggingFaceH4/ultrachat_200k
- winogrande
- CollectiveCognition/chats-data-2023-09-22
- CollectiveCognition/chats-data-2023-09-27
- CollectiveCognition/chats-data-2023-10-16
- Locutusque/GPT4-LLM-Cleaned-chatml
- Locutusque/GPT4-roleplay-chatml
- Locutusque/GPT4-roleplay-v2-chatml
- Locutusque/WizardLM_evol_instruct_70k_chatml
- Locutusque/camel-chatml
- Locutusque/code-assistant-chatml
- Locutusque/code-assistant-v2-chatml
- Locutusque/dolphin-gpt4-chatml
- Locutusque/function-calling-chatml
- Locutusque/general-instruct-chatml
- Locutusque/lmsys-chat-1m-best
- Locutusque/medtext-chatml
- Locutusque/metamathqa-chatml
- Locutusque/platypus-chatml
- Locutusque/pubmedqa-chatml
- Locutusque/unnatural-instructions-chatml
|
JailbreakV-28K/JailBreakV-28k | ---
license: mit
task_categories:
- text-generation
- question-answering
pretty_name: JailBreakV_28K
size_categories:
- 10K<n<100K
configs:
- config_name: JailBreakV_28K
data_files:
- split: mini_JailBreakV_28K
path: JailBreakV_28K/mini_JailBreakV_28K.csv
- split: JailBreakV_28K
path: JailBreakV_28K/JailBreakV_28K.csv
- config_name: RedTeam_2K
data_files:
- split: RedTeam_2K
path: JailBreakV_28K/RedTeam_2K.csv
---
# ⛓💥 JailBreakV-28K: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks
[**🌐 GitHub**](https://github.com/EddyLuo1232/JailBreakV_28K) | [**🛎 Project Page**](https://eddyluo1232.github.io/JailBreakV28K/) | [**👉 Download full datasets**](https://drive.google.com/file/d/1ZrvSHklXiGYhpiVoxUH8FWc5k0fv2xVZ/view?usp=drive_link)
## If you like our project, please give us a star ⭐ on Hugging Face for the latest update.
## 📥 Using our dataset via huggingface Dataset
```python
from datasets import load_dataset
mini_JailBreakV_28K = load_dataset("JailbreakV-28K/JailBreakV-28k", 'JailBreakV_28K')["mini_JailBreakV_28K"]
JailBreakV_28K = load_dataset("JailbreakV-28K/JailBreakV-28k", 'JailBreakV_28K')["JailBreakV_28K"]
RedTeam_2K = load_dataset("JailbreakV-28K/JailBreakV-28k", 'RedTeam_2K')["RedTeam_2K"]
```
## 👻 An Example of Inference and Evaluation on Bunny-v1_0
```bash
pip install -r requirement.txt
python attack.py --root JailBreakV_28K # use Bunny-v1_0 as case in this script
python evaluate_llamaguard_ours.py --data_path ./results/Bunny-v1_0/JailBreakV_28K.csv
```
## 😃 Dataset Details
JailBreakV_28K and mini_JailBreakV_28K datasets will comprise the following columns:
- **id**: Unique identifier for all samples.
- **jailbreak_query**: Jailbreak_query obtained by different jailbreak attacks.
- **redteam_query**: Harmful query from RedTeam_2K.
- **format**: Jailbreak attack method including template, persuade, logic, figstep, query-relevant.
- **policy**: The safety policy that the redteam_query violates.
- **image_path**: The file path of the image.
- **from**: The source of data.
- **selected_mini**: "True" if the data in mini_JailBreakV_28K dataset, otherwise "False".
- **transfer_from_llm**: "True" if the jailbreak_query is transferred from LLM jailbreak attacks, otherwise "False".
RedTeam_2K will comprise the following columns:
- **id**: Unique identifier for all samples.
- **question**: Harmful query.
- **policy**: The safety policy that the redteam_query violates.
- **from**: The source of data.
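Since `selected_mini` and `transfer_from_llm` are stored as the strings `"True"`/`"False"`, subsets such as mini_JailBreakV_28K can be recovered from the full split by filtering on those flags. A minimal sketch over illustrative rows (not real dataset entries):

```python
# Toy rows mirroring the JailBreakV_28K columns above (values are illustrative).
rows = [
    {"id": 0, "format": "template", "selected_mini": "True", "transfer_from_llm": "True"},
    {"id": 1, "format": "figstep", "selected_mini": "False", "transfer_from_llm": "False"},
]

# The mini subset is simply the rows flagged with selected_mini == "True".
mini = [r for r in rows if r["selected_mini"] == "True"]
```

The same predicate can be passed to `Dataset.filter` on the loaded split.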
## 🚀 Data Composition
<!-- Provide a quick summary of the dataset. -->
**RedTeam-2K**: The RedTeam-2K dataset is a meticulously curated collection of 2,000 harmful queries aimed at identifying alignment vulnerabilities within LLMs and MLLMs. This dataset spans 16 safety policies and incorporates queries from 8 distinct sources.
<img src="distribution_used.png" width="800"/>
**JailBreakV_28K**: JailBreakV-28K contains 28,000 jailbreak text-image pairs, which include 20,000 text-based LLM transfer jailbreak attacks and 8,000 image-based MLLM jailbreak attacks. This dataset covers 16 safety policies and 5 diverse jailbreak methods.
## 🛠️ Dataset Overview
The RedTeam-2K dataset is a meticulously curated collection of 2,000 harmful queries aimed at identifying alignment vulnerabilities within LLMs and MLLMs. This dataset spans 16 safety policies and incorporates queries from 8 distinct sources, including GPT Rewrite, Handcraft, GPT Generate, LLM Jailbreak Study, AdvBench, BeaverTails, Question Set, and hh-rlhf of Anthropic.
Building upon the harmful query dataset provided by RedTeam-2K, JailBreakV-28K is designed as a comprehensive and diversified benchmark for evaluating the transferability of jailbreak attacks from LLMs to MLLMs, as well as assessing the alignment robustness of MLLMs against such attacks. Specifically, JailBreakV-28K contains 28,000 jailbreak text-image pairs, which include 20,000 text-based LLM transfer jailbreak attacks and 8,000 image-based MLLM jailbreak attacks. This dataset covers 16 safety policies and 5 diverse jailbreak methods. The jailbreak methods comprise 3 types of LLM transfer attacks, namely Logic (Cognitive Overload), Persuade (Persuasive Adversarial Prompts), and Template (including both Greedy Coordinate Gradient and handcrafted strategies), plus 2 types of MLLM attacks, FigStep and the Query-relevant attack. JailBreakV-28K offers a broad spectrum of attack methodologies and integrates various image types like Nature, Random Noise, Typography, Stable Diffusion (SD), Blank, and SD+Typography Images. We believe JailBreakV-28K can serve as a comprehensive jailbreak benchmark for MLLMs.
<img src="dataset.png" width="800"/>
## 🏆 Mini-Leaderboard
| Model | Total ASR| Transfer Attack ASR|
|----------------------------|:---------:|:------------------:|
| OmniLMM-12B | 58.1 | 70.2 |
| InfiMM-Zephyr-7B | 52.9 | 73.0 |
| LLaMA-Adapter-v2 | 51.2 | 68.1 |
| LLaVA-1.5-13B | 51.0 | 65.5 |
| LLaVA-1.5-7B | 46.8 | 61.4 |
| InstructBLIP-13B | 45.2 | 55.5 |
| InternLM-XComposer2-VL-7B | 39.1 | 29.3 |
| Bunny-v1 | 38.0 | 49.5 |
| Qwen-VL-Chat | 33.7 | 41.2 |
| InstructBLIP-7B | 26.0 | 46.8 |
## ❌ Disclaimers
This dataset contains offensive content that may be disturbing. This benchmark is provided for educational and research purposes only.
## 📲 Contact
- Weidi Luo: luo.1455@osu.edu
- Siyuan Ma: siyuan.ma.pku@gmail.com
- Xiaogeng Liu: xiaogeng.liu@wisc.edu
- Chaowei Xiao: cxiao34@wisc.edu
## 📖 BibTeX:
```bibtex
@misc{luo2024jailbreakv28k,
title={JailBreakV-28K: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks},
author={Weidi Luo and Siyuan Ma and Xiaogeng Liu and Xiaoyu Guo and Chaowei Xiao},
year={2024},
eprint={2404.03027},
archivePrefix={arXiv},
primaryClass={cs.CR}
}
```
[More Information Needed] |
andstor/smart_contract_code_comments | ---
dataset_info:
features:
- name: contract_name
dtype: string
- name: file_path
dtype: string
- name: contract_address
dtype: string
- name: language
dtype: string
- name: class_name
dtype: string
- name: class_code
dtype: string
- name: class_documentation
dtype: string
- name: class_documentation_type
dtype: string
- name: func_name
dtype: string
- name: func_code
dtype: string
- name: func_documentation
dtype: string
- name: func_documentation_type
dtype: string
- name: compiler_version
dtype: string
- name: license_type
dtype: string
- name: swarm_source
dtype: string
- name: meta
struct:
- name: func_code_index
sequence: int64
- name: __index_level_0__
dtype: int64
config_name: data
splits:
- name: train
num_bytes: 11530607173
num_examples: 1267441
- name: test
num_bytes: 1306082431
num_examples: 143080
- name: validation
num_bytes: 1264266873
num_examples: 130849
download_size: 1995835391
dataset_size: 14100956477
paperswithcode_id: verified-smart-contract-code-comments
---
|
bigscience-data/roots_zh-cn_wikipedia | ---
language: zh
language_bcp47:
- zh-CN
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_zh-cn_wikipedia
# wikipedia
- Dataset uid: `wikipedia`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 3.2299 % of total
- 4.2071 % of en
- 5.6773 % of ar
- 3.3416 % of fr
- 5.2815 % of es
- 12.4852 % of ca
- 0.4288 % of zh
- 0.4286 % of zh
- 5.4743 % of indic-bn
- 8.9062 % of indic-ta
- 21.3313 % of indic-te
- 4.4845 % of pt
- 4.0493 % of indic-hi
- 11.3163 % of indic-ml
- 22.5300 % of indic-ur
- 4.4902 % of vi
- 16.9916 % of indic-kn
- 24.7820 % of eu
- 11.6241 % of indic-mr
- 9.8749 % of id
- 9.3489 % of indic-pa
- 9.4767 % of indic-gu
- 24.1132 % of indic-as
- 5.3309 % of indic-or
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
#### Filters applied to: zh
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: id
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-or
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
|
juletxara/xquad_xtreme | ---
pretty_name: XQuAD-XTREME
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
- es
- de
- el
- hi
- th
- ru
- tr
- ar
- vi
- zh
- ro
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- extended|squad
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: xquad
---
# Dataset Card for XQuAD-XTREME
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/deepmind/xquad](https://github.com/deepmind/xquad)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 139.53 MB
- **Size of the generated dataset:** 18.09 MB
- **Total amount of disk used:** 157.62 MB
### Dataset Summary
XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering
performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set
of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translations into eleven languages: Spanish, German,
Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi and Romanian. Consequently, the dataset is entirely parallel across 12 languages.
We also include "translate-train", "translate-dev", and "translate-test" splits for each non-English language from [XTREME](https://proceedings.mlr.press/v119/hu20b/hu20b.pdf) (Hu et al., 2020). These can be used to run XQuAD in the "translate-train" or "translate-test" settings.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### ar
- **Size of downloaded dataset files:** 12.68 MB
- **Size of the generated dataset:** 1.64 MB
- **Total amount of disk used:** 14.33 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
#### de
- **Size of downloaded dataset files:** 12.68 MB
- **Size of the generated dataset:** 1.23 MB
- **Total amount of disk used:** 13.91 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
#### el
- **Size of downloaded dataset files:** 12.68 MB
- **Size of the generated dataset:** 2.11 MB
- **Total amount of disk used:** 14.79 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
#### en
- **Size of downloaded dataset files:** 12.68 MB
- **Size of the generated dataset:** 1.07 MB
- **Total amount of disk used:** 13.75 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
#### es
- **Size of downloaded dataset files:** 12.68 MB
- **Size of the generated dataset:** 1.22 MB
- **Total amount of disk used:** 13.90 MB
An example of 'test' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [527],
"text": ["136"]
},
"context": "\"Die Verteidigung der Panthers gab nur 308 Punkte ab und belegte den sechsten Platz in der Liga, während sie die NFL mit 24 Inte...",
"id": "56beb4343aeaaa14008c925c",
"question": "Wie viele Sacks erzielte Jared Allen in seiner Karriere?"
}
```
### Data Fields
The data fields are the same among all splits.
#### ar
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
#### de
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
#### el
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
#### en
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
#### es
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
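In every config, `answer_start` indexes into `context`, so extracted answer spans can be validated directly. A toy sketch with made-up values (not an actual dataset instance):

```python
# Hypothetical instance following the field layout above.
example = {
    "context": "He had 136 sacks in his career.",
    "question": "How many sacks did he record?",
    "answers": {"text": ["136"], "answer_start": [7]},
}

# Each answer should equal the context slice that starts at its answer_start.
spans = [
    example["context"][start:start + len(text)]
    for start, text in zip(example["answers"]["answer_start"], example["answers"]["text"])
]
```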
### Data Splits
| name | validation |
| -------- | ---------: |
| ar | 1190 |
| de | 1190 |
| el | 1190 |
| en | 1190 |
| es | 1190 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Artetxe:etal:2019,
author = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama},
title = {On the cross-lingual transferability of monolingual representations},
journal = {CoRR},
volume = {abs/1910.11856},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.11856}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
thennal/IMaSC | ---
annotations_creators:
- expert-generated
language:
- ml
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ICFOSS Malayalam Speech Corpus
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- text-to-speech
- automatic-speech-recognition
task_ids: []
---
# IMaSC: ICFOSS Malayalam Speech Corpus
**IMaSC** is a Malayalam text and speech corpus made available by [ICFOSS](https://icfoss.in/) for the purpose of developing speech technology for Malayalam, particularly text-to-speech. The corpus contains 34,473 text-audio pairs of Malayalam sentences spoken by 8 speakers, totalling approximately 50 hours of audio.
## Dataset Description
- **Paper:** [IMaSC — ICFOSS Malayalam Speech Corpus](https://arxiv.org/abs/2211.12796)
- **Point of Contact:** [Thennal D K](mailto:thennal10@gmail.com)
## Dataset Structure
The dataset consists of 34,473 instances with fields `text`, `speaker`, and `audio`. The audio is mono, sampled at 16 kHz. The transcription is normalized and only includes Malayalam characters and common punctuation. The table below specifies how the 34,473 instances are split between the speakers, along with some basic speaker info:
| Speaker | Gender | Age | Time (HH:MM:SS) | Sentences |
| --- | --- | --- | --- | --- |
| Joji | Male | 28 | 06:08:55 | 4,332 |
| Sonia | Female | 43 | 05:22:39 | 4,294 |
| Jijo | Male | 26 | 05:34:05 | 4,093 |
| Greeshma | Female | 22 | 06:32:39 | 4,416 |
| Anil | Male | 48 | 05:58:34 | 4,239 |
| Vidhya | Female | 23 | 04:21:56 | 3,242 |
| Sonu | Male | 25 | 06:04:43 | 4,219 |
| Simla | Female | 24 | 09:34:21 | 5,638 |
| **Total** | | | **49:37:54** | **34,473** |
### Data Instances
An example instance is given below:
```json
{'text': 'സർവ്വകലാശാല വൈസ് ചാൻസലർ ഡോ. ചന്ദ്രബാബുവിനും സംഭവം തലവേദനയാവുകയാണ്',
'speaker': 'Sonia',
'audio': {'path': None,
'array': array([ 0.00921631, 0.00930786, 0.00939941, ..., -0.00497437,
-0.00497437, -0.00497437]),
'sampling_rate': 16000}}
```
### Data Fields
- **text** (str): Transcription of the audio file
- **speaker** (str): The name of the speaker
- **audio** (dict): Audio object including loaded audio array, sampling rate and path to audio (always None)
### Data Splits
We provide all the data in a single `train` split. The loaded dataset object thus looks like this:
```json
DatasetDict({
train: Dataset({
features: ['text', 'speaker', 'audio'],
num_rows: 34473
})
})
```
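Since each `audio` entry carries the raw array and its sampling rate, clip durations follow directly. A toy sketch with a dummy two-second array (not a real corpus entry):

```python
# Toy instance with the same layout as the example above.
example = {
    "text": "...",
    "speaker": "Sonia",
    "audio": {"path": None, "array": [0.0] * 32000, "sampling_rate": 16000},
}

# Duration in seconds: number of samples divided by the sampling rate.
duration_s = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
```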
### Dataset Creation
The text is sourced from [Malayalam Wikipedia](https://ml.wikipedia.org), and read by our speakers in studio conditions. Extensive error correction was conducted to provide a clean, accurate database. Further details are given in our paper, accessible at [https://arxiv.org/abs/2211.12796](https://arxiv.org/abs/2211.12796).
## Additional Information
### Licensing
The corpus is made available under the [Creative Commons license (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation
```
@misc{gopinath2022imasc,
title={IMaSC -- ICFOSS Malayalam Speech Corpus},
author={Deepa P Gopinath and Thennal D K and Vrinda V Nair and Swaraj K S and Sachin G},
year={2022},
eprint={2211.12796},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
|
vietgpt/wikipedia_en | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21102365479
num_examples: 6623239
download_size: 12161597141
dataset_size: 21102365479
task_categories:
- text-generation
language:
- en
tags:
- LM
size_categories:
- 1M<n<10M
---
# Wikipedia
- Source: https://huggingface.co/datasets/wikipedia
- Num examples: 6,623,239
- Language: English
```python
from datasets import load_dataset
load_dataset("vietgpt/wikipedia_en")
``` |
cyanic-selkie/aida-conll-yago-wikidata | ---
license: cc-by-sa-3.0
task_categories:
- token-classification
language:
- en
tags:
- wikidata
- wikipedia
- named-entity-recognition
- named-entity-linking
pretty_name: AIDA CoNLL-YAGO Wikidata
size_categories:
- 10K<n<100K
---
# Dataset Card for AIDA CoNLL-YAGO Wikidata
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** [AIDA CoNLL-YAGO Wikidata repository](https://github.com/cyanic-selkie/aida-conll-yago-wikidata)
### Dataset Summary
The AIDA CoNLL-YAGO Wikidata dataset is the same as the original [AIDA CoNLL-YAGO](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/ambiverse-nlu/aida/downloads) dataset, but with Wikidata QIDs instead of Wikipedia titles as entity identifiers. They are automatically generated (with a few manual corrections) from Wikidata and Wikipedia dumps (March 1, 2023).
The code for generating the dataset can be found [here](https://github.com/cyanic-selkie/aida-conll-yago-wikidata).
### Supported Tasks
- `named-entity-recognition`: The dataset can be used to train a model for Named Entity Recognition.
- `named-entity-linking`: The dataset can be used to train a model for Named Entity Linking.
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point represents a document (news article).
The `text` field contains the original text in an NFC normalized, UTF-8 encoded string.
The `entities` field contains a list of entities, each represented by a struct with the inclusive starting byte `start` field, exclusive ending byte `end` field, a nullable `qid` field, and a nullable `pageid` field.
Additionally, each document has a unique `document_id` field.
An example from the AIDA CoNLL-YAGO Wikidata test set looks as follows:
```
{
"document_id": 1214,
"text": "RADIO ROMANIA AFTERNOON HEALINES AT 4 PM . BUCHAREST 1996-12-06 Radio Romania news headlines : * The Democratic Convention signed an agreement on government and parliamentary support with its coalition partners the Social Democratic Union and the Hungarian Democratic Union ( UDMR ) . The ceremony was attended by President Emil Constantinescu . * The three parties in the government coalition have committed themselves to a real reform of Romania 's economy , Constantinescu said after the ceremony . * The UDMR wants to contribute to social reform and economic revival in Romania , union leader Marko Bela said . * The international airport in Timisoara and the domestic airports in Arad , Oradea and Sibiu were closed due to fog . -- Bucharest Newsroom 40-1 3120264",
"entities": [
{
"start": 0,
"end": 13,
"tag": "ORG",
"pageid": null,
"qid": null,
"title": null
},
{
"start": 43,
"end": 52,
"tag": "LOC",
"pageid": 36877,
"qid": 19660,
"title": "Bucharest"
},
{
"start": 64,
"end": 77,
"tag": "ORG",
"pageid": null,
"qid": null,
"title": null
},
{
"start": 101,
"end": 122,
"tag": "MISC",
"pageid": null,
"qid": null,
"title": null
},
{
"start": 215,
"end": 238,
"tag": "ORG",
"pageid": null,
"qid": null,
"title": null
},
{
"start": 247,
"end": 273,
"tag": "ORG",
"pageid": null,
"qid": null,
"title": null
},
{
"start": 276,
"end": 280,
"tag": "ORG",
"pageid": 49749134,
"qid": 266582,
"title": "Democratic_Union_of_Hungarians_in_Romania"
},
{
"start": 324,
"end": 343,
"tag": "PER",
"pageid": 393370,
"qid": 299152,
"title": "Emil_Constantinescu"
},
{
"start": 440,
"end": 447,
"tag": "LOC",
"pageid": 25445,
"qid": 218,
"title": "Romania"
},
{
"start": 461,
"end": 475,
"tag": "PER",
"pageid": 393370,
"qid": 299152,
"title": "Emil_Constantinescu"
},
{
"start": 508,
"end": 512,
"tag": "ORG",
"pageid": 49749134,
"qid": 266582,
"title": "Democratic_Union_of_Hungarians_in_Romania"
},
{
"start": 574,
"end": 581,
"tag": "LOC",
"pageid": 25445,
"qid": 218,
"title": "Romania"
},
{
"start": 597,
"end": 607,
"tag": "PER",
"pageid": 1219345,
"qid": 897108,
"title": "Béla_Markó"
},
{
"start": 646,
"end": 655,
"tag": "LOC",
"pageid": 33693389,
"qid": 83404,
"title": "Timişoara"
},
{
"start": 685,
"end": 689,
"tag": "LOC",
"pageid": 22537901,
"qid": 173591,
"title": "Arad,_Romania"
},
{
"start": 692,
"end": 698,
"tag": "LOC",
"pageid": 2024606,
"qid": 2102332,
"title": "Oradea_International_Airport"
},
{
"start": 703,
"end": 708,
"tag": "LOC",
"pageid": 2384413,
"qid": 946418,
"title": "Sibiu_International_Airport"
},
{
"start": 737,
"end": 755,
"tag": "ORG",
"pageid": null,
"qid": null,
"title": null
}
]
}
```
### Data Fields
- `document_id`: an integer that uniquely identifies the document this sentence belongs to
- `sentence_index`: an integer that uniquely identifies the position of the sentence in its original document
- `text`: an NFC normalized, UTF-8 encoded string representing the sentence
- `entities`: a list of structs representing entities, each entity has:
- `start`: an integer representing the inclusive starting UTF-8 code point of the entity
- `end`: an integer representing the exclusive ending UTF-8 code point of the entity
- `tag`: a string representing the entity type (PER, LOC, ORG or MISC)
- `qid`: an integer representing the Wikidata QID this entity refers to; it can be null if the entity didn't exist in Wikidata at the time of the creation of the original dataset
- `pageid`: an integer representing the English Wikipedia's pageID this entity refers to; it can be null if the entity didn't exist in Wikipedia at the time of the creation of the original dataset
- `title`: an NFC normalized, UTF-8 encoded string representing the English Wikipedia's title this entity refers to; it can be null if the entity didn't exist in Wikipedia at the time of the creation of the original dataset
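The `start`/`end` offsets slice straight into `text` (for the ASCII example above, byte and code-point offsets coincide). A minimal sketch using the test-set document shown earlier:

```python
# Opening of the test-set example above; "HEALINES" is verbatim source text.
text = "RADIO ROMANIA AFTERNOON HEALINES AT 4 PM . BUCHAREST 1996-12-06"
entity = {"start": 43, "end": 52, "tag": "LOC", "qid": 19660, "title": "Bucharest"}

# Recover the surface form of the mention from its offsets.
surface = text[entity["start"]:entity["end"]]
```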
### Data Splits
The data is split into training, validation and test sets; all of the sentences belonging to an article are in the same split. The final split sizes are as follows:
| | Train | Validation | Test |
| :----- | :------: | :-----: | :----: |
| AIDA CoNLL-YAGO Wikidata - documents | 946 | 216 | 231 |
| AIDA CoNLL-YAGO Wikidata - entities | 23,374 | 5,912 | 5,608 |
| AIDA CoNLL-YAGO Wikidata - entities with QIDs | 18,540 | 4,791 | 4,481 |
## Additional Information
### Licensing Information
The licensing status of the dataset is the same as the licensing status of the original [AIDA CoNLL-YAGO](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/ambiverse-nlu/aida/downloads) dataset which is under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). |
pszemraj/scientific_lay_summarisation-elife-norm | ---
license: mit
task_categories:
- summarization
- text2text-generation
language:
- en
size_categories:
- 10K<n<100K
source_datasets: tomasg25/scientific_lay_summarisation
---
# scientific_lay_summarisation - elife - normalized
This is the "_elife_" split. For more details, refer to the [PLOS split README](https://huggingface.co/datasets/pszemraj/scientific_lay_summarisation-plos-norm)
## Contents
load with datasets:
```python
from datasets import load_dataset
# If the dataset is gated/private, make sure you have run huggingface-cli login
dataset = load_dataset("pszemraj/scientific_lay_summarisation-elife-norm")
dataset
```
Output:
```python
DatasetDict({
train: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 4346
})
test: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 241
})
validation: Dataset({
features: ['article', 'summary', 'section_headings', 'keywords', 'year', 'title', 'article_length', 'summary_length'],
num_rows: 241
})
})
```
## Lengths
Train set:
![t5-tokens](https://i.imgur.com/8BQrbgs.png)
|
Francesco/road-signs-6ih4y | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': road-signs
'1': bus_stop
'2': do_not_enter
'3': do_not_stop
'4': do_not_turn_l
'5': do_not_turn_r
'6': do_not_u_turn
'7': enter_left_lane
'8': green_light
'9': left_right_lane
'10': no_parking
'11': parking
'12': ped_crossing
'13': ped_zebra_cross
'14': railway_crossing
'15': red_light
'16': stop
'17': t_intersection_l
'18': traffic_light
'19': u_turn
'20': warning
'21': yellow_light
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: road-signs-6ih4y
tags:
- rf100
---
# Dataset Card for road-signs-6ih4y
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/road-signs-6ih4y
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
road-signs-6ih4y
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
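Since `bbox` uses the COCO `[x_min, y_min, width, height]` convention linked above, a small helper for converting boxes to corner coordinates is often handy. This is a minimal sketch using the first box from the sample data instance:

```python
def coco_to_corners(bbox):
    """Convert a COCO [x_min, y_min, width, height] box to
    [x_min, y_min, x_max, y_max] corner coordinates."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# First box from the sample data instance above.
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))
# [302.0, 109.0, 375.0, 161.0]
```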
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/road-signs-6ih4y
### Citation Information
```
@misc{ road-signs-6ih4y,
title = { road signs 6ih4y Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/road-signs-6ih4y } },
url = { https://universe.roboflow.com/object-detection/road-signs-6ih4y },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
argilla/dolly-curated-comparison-falcon-7b-instruct | ---
language: en
dataset_info:
features:
- name: prompt
dtype: string
- name: response-1
dtype: string
- name: response-2
dtype: string
- name: category
dtype: string
- name: original_response
dtype: string
- name: external_id
dtype: int64
splits:
- name: train
num_bytes: 10328235
num_examples: 7401
download_size: 6598297
dataset_size: 10328235
---
# Dataset Card for "dolly-curated-comparison-falcon-7b-instruct"
This dataset contains two responses generated with the `falcon-7b-instruct` model, together with the original curated prompt and response from the Dolly v2 curated dataset. For now, only 50% of the original dataset is available, but we plan to complete it.
This dataset can be used for training a reward model for RLHF using [Argilla Feedback](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/conceptual_guides.html)
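One possible way to turn each row into preference pairs for reward-model training is to treat the curated response as "chosen" and each generated response as "rejected". This pairing is an illustrative assumption, not a recipe prescribed by the dataset:

```python
def to_preference_pairs(row):
    """Illustrative: pair the curated response (as 'chosen') with each
    generated response (as 'rejected'). Always preferring the curated
    response is an assumption made for this sketch."""
    return [
        {
            "prompt": row["prompt"],
            "chosen": row["original_response"],
            "rejected": row[key],
        }
        for key in ("response-1", "response-2")
    ]

# A made-up row in the documented schema.
row = {
    "prompt": "What is the capital of France?",
    "response-1": "I think it's Lyon.",
    "response-2": "Paris is the capital of France.",
    "original_response": "The capital of France is Paris.",
    "category": "open_qa",
    "external_id": 0,
}
pairs = to_preference_pairs(row)
print(len(pairs))  # 2
```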
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shahules786/orca-best | ---
dataset_info:
features:
- name: cluster
struct:
- name: samples
list:
- name: input
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: instruction
dtype: string
- name: num_samples
dtype: int64
splits:
- name: train
num_bytes: 900092818
num_examples: 328906
download_size: 462629849
dataset_size: 900092818
---
## Best of Orca
This is a filtered version of the Orca GPT4 1M instructions. From repeated experiments and analysis, I came to the conclusion that the original dataset
contains a lot of low-quality instructions, which contribute to poor generalization.
The solution I came up with is to filter the dataset and remove the unwanted samples. I applied two levels of filtering:
1. Removed instructions with fewer than 100 tokens in the response.
2. Deduplicated the data, grouped by instruction type, using GTE embeddings and cosine similarity (threshold > 0.95).
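A minimal sketch of the second filter, assuming sentence embeddings (e.g. from a GTE model) have already been computed for each group; the greedy keep/drop loop below is one straightforward way to apply a cosine-similarity threshold, not necessarily the exact procedure used here:

```python
import numpy as np

def dedupe_by_cosine(embeddings, threshold=0.95):
    """Greedily keep rows whose cosine similarity to every already-kept
    row stays at or below the threshold; return the kept indices."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept = []
    for i, vec in enumerate(normed):
        if all(float(vec @ normed[j]) <= threshold for j in kept):
            kept.append(i)
    return kept

# Two near-identical vectors and one distinct vector: the duplicate is dropped.
embs = np.array([[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]])
print(dedupe_by_cosine(embs))  # [0, 2]
```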
After these two steps, the number of samples was reduced to one third of the original count.
For selecting a sample from each cluster, I tried different methods, including random selection from a cluster.
We used this dataset to train multiple Open-Assistant models, confirming my hypothesis that data quality matters more than quantity.
This dataset was used in some of our best models, including https://huggingface.co/OpenAssistant/llama2-70b-oasst-sft-v10
⭐️ All models perform much better than models trained on full ORCA samples.
## Credits
* This wouldn't be possible without the amazing work of Eric in recreating the ORCA dataset. Check it out:
https://huggingface.co/datasets/ehartford/dolphin
* This dataset was created in association with the Open-Assistant team @jordanclive and @andreaskoepf
## Citations
```
@misc{Orca-best,
title = {Orca-best: A filtered version of orca gpt4 dataset.},
author = {Shahul Es},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}},
}
``` |
dsfsi/vukuzenzele-monolingual | ---
language:
- eng
- afr
- nbl
- xho
- zul
- nso
- sot
- tsn
- ssw
- ven
- tso
license: cc-by-4.0
task_categories:
- translation
pretty_name: The Vuk'uzenzele South African Multilingual Corpus
tags:
- multilingual
- government
arxiv: 2303.03750
dataset_info:
- config_name: afr
features:
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
- name: edition
dtype: string
- name: language_code
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 462140
num_examples: 130
- name: test
num_bytes: 117811
num_examples: 28
- name: eval
num_bytes: 109553
num_examples: 29
download_size: 431879
dataset_size: 689504
- config_name: eng
features:
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
- name: edition
dtype: string
- name: language_code
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 369888
num_examples: 120
- name: test
num_bytes: 89637
num_examples: 26
- name: eval
num_bytes: 77360
num_examples: 26
download_size: 338733
dataset_size: 536885
- config_name: nbl
features:
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
- name: edition
dtype: string
- name: language_code
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 535653
num_examples: 132
- name: test
num_bytes: 112521
num_examples: 28
- name: eval
num_bytes: 125205
num_examples: 29
download_size: 494289
dataset_size: 773379
- config_name: nso
features:
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
- name: edition
dtype: string
- name: language_code
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 538443
num_examples: 128
- name: test
num_bytes: 129131
num_examples: 27
- name: eval
num_bytes: 114196
num_examples: 28
download_size: 452010
dataset_size: 781770
- config_name: sot
features:
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
- name: edition
dtype: string
- name: language_code
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 532606
num_examples: 131
- name: test
num_bytes: 113414
num_examples: 28
- name: eval
num_bytes: 118072
num_examples: 29
download_size: 453603
dataset_size: 764092
- config_name: ssw
features:
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
- name: edition
dtype: string
- name: language_code
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 526390
num_examples: 130
- name: test
num_bytes: 116446
num_examples: 28
- name: eval
num_bytes: 121511
num_examples: 29
download_size: 477822
dataset_size: 764347
- config_name: tsn
features:
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
- name: edition
dtype: string
- name: language_code
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 622646
num_examples: 128
- name: test
num_bytes: 121183
num_examples: 27
- name: eval
num_bytes: 127609
num_examples: 28
download_size: 496882
dataset_size: 871438
- config_name: tso
features:
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
- name: edition
dtype: string
- name: language_code
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 546021
num_examples: 128
- name: test
num_bytes: 120869
num_examples: 28
- name: eval
num_bytes: 98419
num_examples: 28
download_size: 446456
dataset_size: 765309
- config_name: ven
features:
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
- name: edition
dtype: string
- name: language_code
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 587325
num_examples: 128
- name: test
num_bytes: 127171
num_examples: 28
- name: eval
num_bytes: 109780
num_examples: 28
download_size: 461952
dataset_size: 824276
- config_name: xho
features:
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
- name: edition
dtype: string
- name: language_code
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 518328
num_examples: 130
- name: test
num_bytes: 120927
num_examples: 28
- name: eval
num_bytes: 113282
num_examples: 28
download_size: 478513
dataset_size: 752537
- config_name: zul
features:
- name: title
dtype: string
- name: author
dtype: string
- name: text
dtype: string
- name: edition
dtype: string
- name: language_code
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 520964
num_examples: 129
- name: test
num_bytes: 107058
num_examples: 28
- name: eval
num_bytes: 107359
num_examples: 28
download_size: 459835
dataset_size: 735381
configs:
- config_name: afr
data_files:
- split: train
path: afr/train-*
- split: test
path: afr/test-*
- split: eval
path: afr/eval-*
- config_name: eng
data_files:
- split: train
path: eng/train-*
- split: test
path: eng/test-*
- split: eval
path: eng/eval-*
- config_name: nbl
data_files:
- split: train
path: nbl/train-*
- split: test
path: nbl/test-*
- split: eval
path: nbl/eval-*
- config_name: nso
data_files:
- split: train
path: nso/train-*
- split: test
path: nso/test-*
- split: eval
path: nso/eval-*
- config_name: sot
data_files:
- split: train
path: sot/train-*
- split: test
path: sot/test-*
- split: eval
path: sot/eval-*
- config_name: ssw
data_files:
- split: train
path: ssw/train-*
- split: test
path: ssw/test-*
- split: eval
path: ssw/eval-*
- config_name: tsn
data_files:
- split: train
path: tsn/train-*
- split: test
path: tsn/test-*
- split: eval
path: tsn/eval-*
- config_name: tso
data_files:
- split: train
path: tso/train-*
- split: test
path: tso/test-*
- split: eval
path: tso/eval-*
- config_name: ven
data_files:
- split: train
path: ven/train-*
- split: test
path: ven/test-*
- split: eval
path: ven/eval-*
- config_name: xho
data_files:
- split: train
path: xho/train-*
- split: test
path: xho/test-*
- split: eval
path: xho/eval-*
- config_name: zul
data_files:
- split: train
path: zul/train-*
- split: test
path: zul/test-*
- split: eval
path: zul/eval-*
---
# The Vuk'uzenzele South African Multilingual Corpus
Give Feedback 📑: [DSFSI Resource Feedback Form](https://docs.google.com/forms/d/e/1FAIpQLSf7S36dyAUPx2egmXbFpnTBuzoRulhL5Elu-N1eoMhaO7v10w/formResponse)
## About Dataset
The dataset was obtained from the South African government magazine Vuk'uzenzele, created by the [Government Communication and Information System (GCIS)](https://www.gcis.gov.za/).
The original raw PDFs were obtained from the [Vuk'uzenzele website](https://www.vukuzenzele.gov.za/).
The datasets contain government magazine editions in 11 languages, namely:
| Language | Code | Language | Code |
|------------|-------|------------|-------|
| English | (eng) | Sepedi | (nso) |
| Afrikaans | (afr) | Setswana | (tsn) |
| isiNdebele | (nbl) | Siswati | (ssw) |
| isiXhosa | (xho) | Tshivenda | (ven) |
| isiZulu    | (zul) | Xitsonga   | (tso) |
| Sesotho | (sot) |
**Note:** The languages use the ISO 639-2 language codes.
The data is split by language in JSONL format and each row is of the form:
```
{
"title": "Title for article",
"author": "Author Name or Vukuzenzele",
"text": "Article text",
"edition": "Linked Magazine edition",
"language_code": "ISO 639-2 language code"
}
```
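Since each split is plain JSONL, rows can also be parsed with the standard library alone. The sample line below is made up in the documented shape:

```python
import json

def read_jsonl(lines):
    """Parse an iterable of JSONL lines into article dicts, skipping blank lines."""
    return [json.loads(line) for line in lines if line.strip()]

# A made-up row in the documented shape; real rows come from the split files above.
sample = '{"title": "Example", "author": "Vukuzenzele", "text": "Article text", "edition": "2023-01", "language_code": "eng"}'
articles = read_jsonl([sample])
print(articles[0]["language_code"])  # eng
```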
## Disclaimer
This dataset contains machine-readable data extracted from PDF documents, from https://www.vukuzenzele.gov.za/, provided by the Government Communication Information System (GCIS). While efforts were made to ensure the accuracy and completeness of this data, there may be errors or discrepancies between the original publications and this dataset. No warranties, guarantees or representations are given in relation to the information contained in the dataset. The members of the Data Science for Societal Impact Research Group bear no responsibility and/or liability for any such errors or discrepancies in this dataset. The Government Communication Information System (GCIS) bears no responsibility and/or liability for any such errors or discrepancies in this dataset. It is recommended that users verify all information contained herein before making decisions based upon this information.
## Authors
- Vukosi Marivate - [@vukosi](https://twitter.com/vukosi)
- Andani Madodonga
- Daniel Njini
- Richard Lastrucci
- Isheanesu Dzingirai
- Jenalea Rajab
## Citation
**Paper**
[Preparing the Vuk'uzenzele and ZA-gov-multilingual South African multilingual corpora](https://arxiv.org/pdf/2303.03750)
> @inproceedings{lastrucci-etal-2023-preparing,
title = "Preparing the Vuk{'}uzenzele and {ZA}-gov-multilingual {S}outh {A}frican multilingual corpora",
author = "Richard Lastrucci and Isheanesu Dzingirai and Jenalea Rajab and Andani Madodonga and Matimba Shingange and Daniel Njini and Vukosi Marivate",
booktitle = "Proceedings of the Fourth workshop on Resources for African Indigenous Languages (RAIL 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.rail-1.3",
pages = "18--25"
}
**Dataset**
Vukosi Marivate, Andani Madodonga, Daniel Njini, Richard Lastrucci, Isheanesu Dzingirai, Jenalea Rajab. **The Vuk'uzenzele South African Multilingual Corpus**, 2023
> @dataset{marivate_vukosi_2023_7598540,
author = {Marivate, Vukosi and
Njini, Daniel and
Madodonga, Andani and
Lastrucci, Richard and
                  Dzingirai, Isheanesu and
Rajab, Jenalea},
title = {The Vuk'uzenzele South African Multilingual Corpus},
month = feb,
year = 2023,
publisher = {Zenodo},
doi = {10.5281/zenodo.7598539},
url = {https://doi.org/10.5281/zenodo.7598539}
}
Licences
-------
* License for Data - [CC BY 4.0](LICENSE.data.md)
* Licence for Code - [MIT License](LICENSE.md)
|
ziozzang/EverythingLM-data-V2-Ko | ---
license: mit
language:
- ko
---
# Translated into Korean with DeepL
All texts were translated with DeepL (machine translated).
- Issue: some data items are missing because of the DeepL plan and the processing method. I used a very cheap plan; all data was merged into a single file and then split with a small script and by hand.
- This is a sample/test run of dataset creation with DeepL.
- Original Dataset: totally-not-an-llm/EverythingLM-data-V2
# EverythingLM V2 Dataset
**EverythingLM V2** is a diverse instruct dataset consisting of 1k human-assistant conversations. These were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions.
### Differences from V1:
- All data in V2 is generated by GPT4
- Higher quality dataset generation pipeline:
  - More humanlike seed prompts
- Fixed some bugs in the script
- More diverse creative writing
- More diverse seed prompts in general
- Attempt not to overfit the model on complex instructions by occasionally skipping evol
### Cost:
Reproducing this dataset would cost roughly $40.
### Instruction Categories:
- Reasoning
- Creative Writing
- General Knowledge
- Brainstorming
- Search Query
- Coding
- Basic Instruct
We also leverage various system prompts for evol-instruct and for responding to prompts.
This dataset has also been filtered to remove OpenAI alignment.
### How it stands out:
- Long, detailed outputs
- Humanlike creativity
- CoT reasoning
- Complex & challenging tasks
### Plans:
- Train Llama 7b & 13b models (13b model V1 trained)
- Train Llama 70b QLoRA
- Generate V2 of the dataset, with more categories and GPT-4 (DONE) ✓
Included in this repo is the script to generate the dataset. |
allenai/scifact_entailment | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- cc-by-nc-2.0
multilinguality:
- monolingual
pretty_name: SciFact
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
paperswithcode_id: scifact
dataset_info:
features:
- name: claim_id
dtype: int32
- name: claim
dtype: string
- name: abstract_id
dtype: int32
- name: title
dtype: string
- name: abstract
sequence: string
- name: verdict
dtype: string
- name: evidence
sequence: int32
splits:
- name: train
num_bytes: 1649655
num_examples: 919
- name: validation
num_bytes: 605262
num_examples: 340
download_size: 3115079
dataset_size: 2254917
---
# Dataset Card for "scifact_entailment"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
- **Homepage:** [https://scifact.apps.allenai.org/](https://scifact.apps.allenai.org/)
- **Repository:** <https://github.com/allenai/scifact>
- **Paper:** [Fact or Fiction: Verifying Scientific Claims](https://aclanthology.org/2020.emnlp-main.609/)
- **Point of Contact:** [David Wadden](mailto:davidw@allenai.org)
### Dataset Summary
SciFact is a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts and annotated with labels and rationales.
For more information on the dataset, see [allenai/scifact](https://huggingface.co/datasets/allenai/scifact).
This has the same data, but reformatted as an entailment task. A single instance includes a claim paired with a paper title and abstract, together with an entailment label and a list of evidence sentences (if any).
## Dataset Structure
### Data fields
- `claim_id`: An `int32` claim identifier.
- `claim`: A `string`.
- `abstract_id`: An `int32` abstract identifier.
- `title`: A `string`.
- `abstract`: A list of `strings`, one for each sentence in the abstract.
- `verdict`: The fact-checking verdict, a `string`.
- `evidence`: A list of sentences from the abstract which provide evidence for the verdict.
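Because `evidence` stores sentence indices into `abstract`, the supporting sentences can be recovered with a simple lookup. The record below is made up in the documented shape:

```python
def evidence_sentences(record):
    """Return the abstract sentences referenced by the evidence indices."""
    return [record["abstract"][i] for i in record["evidence"]]

# A made-up record in the documented shape.
record = {
    "claim": "Drug X reduces inflammation.",
    "abstract": [
        "We studied drug X in mice.",
        "Drug X significantly reduced inflammation markers.",
        "Further work is needed.",
    ],
    "verdict": "SUPPORT",
    "evidence": [1],
}
print(evidence_sentences(record))
# ['Drug X significantly reduced inflammation markers.']
```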
### Data Splits
| |train|validation|
|------|----:|---------:|
|claims| 919 | 340|
|