---
dataset_info:
  features:
    - name: doc_id
      dtype: string
    - name: type
      dtype: string
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 25324509618
      num_examples: 806930
  download_size: 9419131940
  dataset_size: 25324509618
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-4.0
task_categories:
  - text-generation
language:
  - hi
  - en
pretty_name: long_context
size_categories:
  - 100K<n<1M
---

# Dataset

This dataset was filtered from AI4Bharat's Sangraha dataset, the largest high-quality, cleaned Indic-language pretraining corpus, containing 251B tokens across 22 languages, extracted from curated sources, existing multilingual corpora, and large-scale translations.

As of now, this dataset contains only Hindi.

## Information

- This dataset is intended primarily for long-context training.
- The minimum document length is 6,000 and the maximum is 3,754,718 (a quick way to inspect lengths is sketched below).
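
A minimal sketch for checking document lengths while streaming; it assumes the reported lengths refer to the character count of the `text` field, which the card does not state explicitly:

```python
from datasets import load_dataset

# Stream the train split so nothing needs to be downloaded up front.
dataset = load_dataset("damerajee/long_context_hindi", split="train", streaming=True)

# Print the length of a few documents (assumed here to be character counts of `text`).
for example in dataset.take(5):
    print(example["doc_id"], example["type"], len(example["text"]))
```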

## Getting started

To download the entire dataset:

```python
from datasets import load_dataset

dataset = load_dataset("damerajee/long_context_hindi")
```

If the dataset is too large to download, you can simply stream it instead:

```python
from datasets import load_dataset

dataset = load_dataset("damerajee/long_context_hindi", split="train", streaming=True)
print(list(dataset.take(2)))
```
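
Because the dataset is aimed at long-context training, a streamed split can also be shuffled with a buffer before being fed to a training loop. The buffer size and example count below are illustrative choices, not recommendations from the dataset authors:

```python
from itertools import islice

from datasets import load_dataset

# Stream the split, then apply a buffered shuffle (buffer_size is an illustrative value).
dataset = load_dataset("damerajee/long_context_hindi", split="train", streaming=True)
shuffled = dataset.shuffle(buffer_size=1_000, seed=42)

# Peek at a few shuffled examples without materializing the whole dataset.
for example in islice(shuffled, 3):
    print(example["doc_id"], len(example["text"]))
```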