## Dataset Overview
The corpus-ckb dataset is a large-scale text collection composed primarily of Kurdish text. It is intended for natural language processing (NLP) tasks such as text classification, language modeling, and machine translation, and is particularly useful for researchers and developers working with Central Kurdish (ckb) data.
## Dataset Details
### Dataset Info
- Features: a single feature, `text`, a string containing a snippet of Kurdish text.
- Splits: a single `train` split containing 2,131,752 examples (3,967,568,183 bytes); a loading sketch follows this list.
- Download size: 1,773,193,447 bytes.
- Dataset size: 3,967,568,183 bytes.
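The snippet below is a minimal sketch of loading the dataset with the Hugging Face `datasets` library and inspecting the `train` split. The repository ID `corpus-ckb` is a placeholder; substitute the full `namespace/name` under which the dataset is published on the Hub.

```python
from datasets import load_dataset

# "corpus-ckb" is a placeholder repository ID; replace it with the full
# "namespace/name" of the dataset on the Hugging Face Hub.
ds = load_dataset("corpus-ckb")

print(ds)                      # DatasetDict listing the available splits
print(ds["train"].num_rows)    # expected: 2,131,752 examples
print(ds["train"][0]["text"])  # the single `text` feature of the first example
```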
### Configurations
- Config name: `default`
- Data files: the training data is stored across multiple files matching the pattern `data/train-*` (see the sketch below).
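The `data_files` argument of `load_dataset` can also target the shard pattern explicitly, for example to restrict loading to a subset of the shards. The sketch below simply mirrors the `data/train-*` prefix listed above, with `corpus-ckb` again standing in for the full repository ID.

```python
from datasets import load_dataset

# Load only the shards matching the data/train-* prefix of the default config.
# "corpus-ckb" is a placeholder for the full Hub repository ID.
ds = load_dataset(
    "corpus-ckb",
    data_files={"train": "data/train-*"},
)
print(ds["train"].num_rows)
```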
### Language
- Language: The dataset contains texts in Central Kurdish (ckb).
## Usage
This dataset is suitable for a variety of NLP applications including but not limited to:
- Text classification: Training models to classify texts into predefined categories.
- Language modeling: Developing language models that can understand or generate Kurdish text (see the preprocessing sketch after this list).
- Machine translation: Creating models to translate between Kurdish and other languages.
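As a concrete illustration of the language-modeling use case, the following sketch streams the corpus and tokenizes it on the fly. The multilingual `xlm-roberta-base` tokenizer is only an illustrative choice whose Central Kurdish coverage should be verified, and `corpus-ckb` remains a placeholder repository ID.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Stream the corpus so the ~1.8 GB download is not required up front.
ds = load_dataset("corpus-ckb", split="train", streaming=True)

# Any tokenizer with reasonable Central Kurdish coverage could be used here;
# xlm-roberta-base is just an illustrative multilingual choice.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

# Tokenize on the fly; the result can feed a language-modeling data collator.
tokenized = ds.map(tokenize, batched=True)

for example in tokenized.take(3):
    print(len(example["input_ids"]), "tokens")
```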
## Limitations and Considerations
- Data Quality: Evaluate the quality of the dataset for your specific use case, as large-scale text collections of this kind may contain noise or inconsistencies (a quick screening sketch follows this list).
- Ethical Use: Be mindful of the ethical implications of using this dataset, especially concerning the representation and handling of cultural and linguistic nuances.
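To act on the data-quality note, one lightweight option is to screen a sample of the corpus for empty, very short, or duplicated snippets before training. The sample size and length threshold below are arbitrary, and `corpus-ckb` is again a placeholder repository ID.

```python
from datasets import load_dataset

# Inspect a sample of the corpus for obvious noise before committing to training.
ds = load_dataset("corpus-ckb", split="train", streaming=True)

seen = set()
empty, short, duplicates = 0, 0, 0
sample_size = 10_000  # arbitrary sample size for a quick check

for i, example in enumerate(ds):
    if i >= sample_size:
        break
    text = example["text"].strip()
    if not text:
        empty += 1
    elif len(text) < 20:  # arbitrary "too short to be useful" threshold
        short += 1
    if text in seen:
        duplicates += 1
    seen.add(text)

print(f"empty: {empty}, short: {short}, duplicates: {duplicates} of {sample_size}")
```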