---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: language
    dtype: string
  - name: template
    dtype: string
  - name: dataset
    dtype: string
  splits:
  - name: train
    num_bytes: 308250831
    num_examples: 1223481
  download_size: 129951272
  dataset_size: 308250831
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
# Dataset Card for "ArabicDarija-xP3x", a subset of "xP3x" by [Muennighoff](https://huggingface.co/Muennighoff)
|
|
|
## Excerpt from the Original Dataset Card
|
## Dataset Description |
|
|
|
- **Repository:** https://github.com/bigscience-workshop/xmtf |
|
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) |
|
- **Point of Contact:** [Niklas Muennighoff](mailto:n.muennighoff@gmail.com) |
|
|
|
### Dataset Summary |
|
|
|
> xP3x (Crosslingual Public Pool of Prompts eXtended) is a collection of prompts & datasets across 277 languages & 16 NLP tasks. It contains all of xP3 + much more! It is used for training future contenders of mT0 & BLOOMZ at project Aya @[C4AI](https://cohere.for.ai/) 🧡 |
|
|
- **Creation:** The dataset can be recreated using the instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3) together with the `xp3x_create.py` file in this repository. We provide this preprocessed version to save processing time.
|
- **Languages:** 277 |
|
- **xP3 Dataset Family:** |
|
|
|
<table> |
|
<tr> |
|
<th>Name</th> |
|
<th>Explanation</th> |
|
<th>Example models</th> |
|
</tr> |
|
<tr> |
|
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
|
<td>Mixture of 17 tasks in 277 languages with English prompts</td> |
|
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td> |
|
</tr> |
|
<tr> |
|
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
|
<td>Mixture of 13 training tasks in 46 languages with English prompts</td> |
|
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> |
|
</tr> |
|
<tr> |
|
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
|
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td> |
|
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> |
|
</tr> |
|
<tr> |
|
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
|
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td> |
|
<td></td> |
|
</tr> |
|
<tr> |
|
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
|
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td> |
|
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> |
|
</tr> |
|
<tr> |
|
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
|
<td>Reprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
|
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> |
|
</tr> |
|
</table> |
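The row schema declared in the YAML header (four string features: `text`, `language`, `template`, `dataset`) can be sketched as plain records. The field values below are illustrative placeholders, not real rows — in particular the language code `ary` (Moroccan Arabic) is an assumption based on this subset's name — and `rows_for_template` is a hypothetical helper showing how rows might be grouped by the prompt template that produced them:

```python
# Minimal sketch of the row schema from the dataset_info header above.
# All four features have dtype string; the values here are placeholders.
example_row = {
    "text": "...prompted input followed by the target...",
    "language": "ary",        # assumption: ISO 639-3 code for Moroccan Arabic
    "template": "...name of the prompt template...",
    "dataset": "...name of the source dataset...",
}

def rows_for_template(rows, template_name):
    """Return only the rows generated by a given prompt template."""
    return [r for r in rows if r["template"] == template_name]
```

In practice the data would be loaded with the 🤗 `datasets` library and filtered with `Dataset.filter`; the helper above just makes the schema concrete.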
|
|
|
|
|
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |