---
license: apache-2.0
---

![Aya Header](https://huggingface.co/datasets/CohereForAI/aya_dataset/resolve/main/aya_header.png)

# AYA Amharic Dataset

This is the Amharic-only extract from the Aya Dataset, a multilingual instruction fine-tuning dataset.

By @henok

# Dataset Summary
The `Aya Dataset` is a multilingual instruction fine-tuning dataset curated by an open-science community via the [Aya Annotation Platform](https://aya.for.ai/) from Cohere For AI. The dataset contains a total of 204k human-annotated prompt-completion pairs along with demographic data about the annotators.

This dataset can be used to train, fine-tune, and evaluate multilingual LLMs.

- **Curated by:** Contributors of the [Aya Open Science Initiative](https://aya.for.ai/).
- **Language(s):** 65 languages (71 including dialects & scripts).
- **License:** [Apache 2.0](https://opensource.org/license/apache-2-0)
- **Aya Datasets Family:**

| Name | Explanation |
|------|-------------|
| [aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) | Human-annotated multilingual instruction fine-tuning dataset, comprising over 204K instances across 65 languages. |
| [aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection) | Created by applying instruction-style templates from fluent speakers to 44 datasets, including translations of 19 instruction-style datasets into 101 languages, providing 513M instances for various tasks. |
| [aya_evaluation_suite](https://huggingface.co/datasets/CohereForAI/aya_evaluation_suite) | A diverse evaluation set for multilingual open-ended generation, featuring 250 culturally grounded prompts in 7 languages, 200 translated prompts in 24 languages, and human-edited versions selected for cross-cultural relevance from English Dolly in 6 languages. |

# Dataset
The `Aya Dataset` comprises two types of data:
1. **Human Annotations:** Original annotations (brand-new prompts and completions written by annotators) and re-annotations (human edits of automatically generated prompts and completions).
2. **Demographics Data:** Anonymized information for each annotator.

## Load with Datasets
To load both the prompt-completion and demographics data with `datasets`, install the library with `pip install datasets --upgrade` and run:

```python
from datasets import load_dataset

# Load the annotations dataset
aya_dataset = load_dataset("CohereForAI/aya_dataset")

# Load the demographics dataset
aya_demographics = load_dataset("CohereForAI/aya_dataset", "demographics")
```

## Data Fields
### Human Annotations (Default)
The data fields are the same among all splits:
- `inputs`: Prompt or input to the language model.
- `targets`: Completion or output of the language model.
- `language`: The language of the `inputs` and `targets`.
- `language_code`: The ISO code for the language of the `inputs` and `targets`.
- `annotation_type`: Whether the `inputs` and `targets` are `original-annotations` or `re-annotations`.
- `user_id`: Unique identifier of the annotator who submitted the prompt-completion pair.

## Data Instances
### Human Annotations (Default)
An example of `train` looks as follows:
```json
{
  "inputs": "What cultural events or festivals add vibrancy to Colombo's calendar...",
  "targets": "Colombo's cultural calendar is adorned with diverse events and festivals that celebrate the city's rich tapestry of traditions...",
  "language": "English",
  "language_code": "eng",
  "annotation_type": "original-annotations",
  "user_id": "f0ff69570af705b75c5a0851883e..."
}
```

# Additional Information
## Provenance
- **Methods Used:** Crowd-sourced through volunteer annotations, followed by a quality-assessment phase in which samples from the dataset were checked.
- **Methodology Details:**
  - *Source:* Original annotations and edits of open-source NLP datasets
  - *Platform:* [Aya Annotation Platform](https://aya.for.ai/)
  - *Dates of Collection:* May 2023 - Dec 2023

## Authorship
- **Publishing Organization:** [Cohere For AI](https://cohere.com/research)
- **Industry Type:** Not-for-profit - Tech
- **Contact Details:** https://aya.for.ai/

## Licensing Information
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.

## Citation Information
```bibtex
@misc{singh2024aya,
  title={Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning},
  author={Shivalika Singh and Freddie Vargus and Daniel Dsouza and Börje F. Karlsson and Abinaya Mahendiran and Wei-Yin Ko and Herumb Shandilya and Jay Patel and Deividas Mataciunas and Laura OMahony and Mike Zhang and Ramith Hettiarachchi and Joseph Wilson and Marina Machado and Luisa Souza Moura and Dominik Krzemiński and Hakimeh Fadaei and Irem Ergün and Ifeoma Okoh and Aisha Alaagib and Oshan Mudannayake and Zaid Alyafeai and Vu Minh Chien and Sebastian Ruder and Surya Guthikonda and Emad A. Alghamdi and Sebastian Gehrmann and Niklas Muennighoff and Max Bartolo and Julia Kreutzer and Ahmet Üstün and Marzieh Fadaee and Sara Hooker},
  year={2024},
  eprint={2402.06619},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```