---
task_categories:
- question-answering
- summarization
language:
- tk
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: dolly_15k_turkmen.jsonl
---

# Turkmen Dolly 15k Dataset

## Overview
This dataset is a Turkmen translation of the original Dolly 15k dataset. Dolly 15k is a publicly available instruction-following dataset created by Databricks, containing 15,000 high-quality, human-generated prompt-response pairs. This Turkmen version aims to make instruction-following data accessible to the Turkmen language community.
## Dataset Details
- Original Dataset: Dolly 15k
- Language: Turkmen
- Number of Samples: 15,000
- Types of Tasks: various, including open-ended generation, classification, and extraction
- Translation Method: Google Translate (see the sketch after this list)
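
The exact tooling is not stated beyond "Google Translate". The sketch below shows one way such a translation pass could be scripted, using the deep-translator package as an assumed client (not confirmed by the dataset authors); `"tk"` is the Google Translate code for Turkmen.

```python
# Hedged sketch of the translation pass; deep-translator is an assumption,
# not the dataset authors' confirmed tooling.
from deep_translator import GoogleTranslator  # pip install deep-translator

translator = GoogleTranslator(source="en", target="tk")  # "tk" = Turkmen

def to_turkmen(text: str) -> str:
    # Google Translate rejects empty input, so pass empty strings through
    # (the "context" field is often empty in Dolly 15k).
    return translator.translate(text) if text.strip() else ""

print(to_turkmen("Who is the founder of House Casterly?"))
```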
## File Format
The dataset is provided in JSONL (JSON Lines) format. Each line in the file represents a single JSON object with the following structure:
```json
{
  "instruction": "Original instruction in English",
  "context": "Original context in English (if applicable)",
  "response": "Original response in English",
  "category": "Category of the task",
  "instruction_tk": "Instruction translated to Turkmen",
  "context_tk": "Context translated to Turkmen (if applicable)",
  "response_tk": "Response translated to Turkmen"
}
```
Example:
```json
{
  "instruction": "In the series A Song of Ice and Fire, who is the founder of House Casterly?",
  "context": "",
  "response": "Corlos, son of Caster",
  "category": "open_qa",
  "instruction_tk": "\"Buz we ot aýdymy\" seriýasynda \"House Casterly\" -ny esaslandyryjy kim?",
  "context_tk": "",
  "response_tk": "Karlos, Kasteriň ogly"
}
```
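
Reading the file requires no special tooling; any JSONL reader works. A minimal loading sketch (the path assumes the file named in `data_files` above sits in the working directory):

```python
import json

# Read the JSONL file: one JSON object per line.
records = []
with open("dolly_15k_turkmen.jsonl", encoding="utf-8") as f:
    for line in f:
        records.append(json.loads(line))

print(len(records))                  # expected: 15000
print(records[0]["instruction_tk"])  # Turkmen instruction of the first record

# Equivalently, with the Hugging Face datasets library:
# from datasets import load_dataset
# ds = load_dataset("json", data_files="dolly_15k_turkmen.jsonl", split="train")
```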
## Acknowledgments
- Original Dolly 15k dataset creators: Databricks
- Translation: Google Translate
## Contact
For questions or issues regarding this dataset, please contact:
- Telegram: @gargamelix
- Email: 31mb41@gmail.com
- GitHub: github.com/mamed0v
## Disclaimer
The translations in this dataset were produced with Google Translate. While this approach allows rapid translation of a large dataset, it can introduce inaccuracies, mistranslations, and loss of nuance, especially in complex or domain-specific content. Exercise caution when using this dataset for tasks that require high precision in language understanding or generation.