---
language:
  - arb
license: apache-2.0
multilinguality:
  - monolingual
size_categories:
  - 1K<n<10K
source_datasets:
  - CohereForAI/aya_dataset
task_categories:
  - other
task_ids: []
pretty_name: Arabic Aya Dataset
dataset_info:
  features:
    - name: inputs
      dtype: string
    - name: targets
      dtype: string
    - name: language
      dtype: string
    - name: annotation_type
      dtype: string
  splits:
    - name: train
      num_bytes: 4970717
      num_examples: 4995
    - name: test
      num_bytes: 225650
      num_examples: 250
  download_size: 2590571
  dataset_size: 5196367
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
tags: []
---

# Arabic Aya Dataset

This dataset is the Arabic partition of the [CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) dataset.

For more information about the data, visit the original dataset repository: [CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset).

The data was extracted with the following code:

```python
import datasets

# Train split: keep only the Arabic ("arb") examples and drop unused columns.
aya_train = datasets.load_dataset("CohereForAI/aya_dataset", split="train")
arb_train = aya_train.filter(lambda x: x["language_code"] == "arb")
arb_train = arb_train.remove_columns(["language_code", "user_id"])

# Test split: same filtering and column removal.
aya_test = datasets.load_dataset("CohereForAI/aya_dataset", split="test")
arb_test = aya_test.filter(lambda x: x["language_code"] == "arb")
arb_test = arb_test.remove_columns(["language_code", "user_id"])

# Create the dataset dictionary with both splits.
arabic_aya = datasets.DatasetDict({
    "train": arb_train,
    "test": arb_test,
})

# Upload to the Hugging Face Hub.
arabic_aya.push_to_hub("abuelnasr/cohere_aya_arabic")
```
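
Once pushed, the dataset can be loaded straight from the Hub. A minimal usage sketch, assuming the default configuration and the `inputs`/`targets` fields listed in the metadata above:

```python
from datasets import load_dataset

# Load both splits of the published dataset from the Hugging Face Hub.
arabic_aya = load_dataset("abuelnasr/cohere_aya_arabic")

print(arabic_aya)                         # DatasetDict with "train" and "test" splits
print(arabic_aya["train"][0]["inputs"])   # First Arabic prompt
print(arabic_aya["train"][0]["targets"])  # Its target completion
```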