---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
  splits:
    - name: train
      num_bytes: 35168157552
      num_examples: 1073116
  download_size: 18303136476
  dataset_size: 35168157552
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# OpenWebTextCorpus tokenized for Gemma

This dataset is a pre-tokenized version of the Skylion007/openwebtext dataset, tokenized with the Gemma tokenizer. As such, it follows the same licensing as the original openwebtext dataset.
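Because each row is already a fixed-length sequence of `input_ids`, the dataset can be streamed directly into a Gemma model without re-tokenizing. The snippet below is a minimal sketch of loading and decoding a row; the repo id `chanind/openwebtext-gemma` and the `google/gemma-2b` tokenizer name are assumptions based on this page rather than part of the original card.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Stream the pre-tokenized dataset (assumed repo id: chanind/openwebtext-gemma).
dataset = load_dataset("chanind/openwebtext-gemma", split="train", streaming=True)

# The Gemma tokenizer is only needed to inspect/decode rows, not to tokenize.
# Accessing it requires accepting the Gemma license on the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

row = next(iter(dataset))
print(len(row["input_ids"]))                    # 8192 tokens per row (context_size)
print(tokenizer.decode(row["input_ids"][:50]))  # first few tokens decoded back to text
```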

This pre-tokenization is a performance optimization for using the openwebtext dataset with Gemma models (gemma-2b, gemma-2b-it, gemma-7b, gemma-7b-it), which all share the same tokenizer. The dataset was created using SAELens with the following settings:

- `context_size`: 8192
- `shuffled`: true
- `begin_batch_token`: "bos"
- `begin_sequence_token`: null
- `sequence_separator_token`: "bos"
- `sae_lens_version`: "3.3.0"
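
For reference, a dataset like this can be produced with SAELens' pretokenization runner. The sketch below assumes the `PretokenizeRunner` / `PretokenizeRunnerConfig` API of SAELens 3.x; the field names mirror the settings listed above, but check the SAELens documentation for the exact signature of your installed version.

```python
from sae_lens import PretokenizeRunner, PretokenizeRunnerConfig

# Sketch of the pretokenization settings listed above (SAELens 3.x API assumed).
cfg = PretokenizeRunnerConfig(
    tokenizer_name="google/gemma-2b",       # Gemma tokenizer (shared across 2b/7b variants)
    dataset_path="Skylion007/openwebtext",  # source dataset
    context_size=8192,
    shuffle=True,
    begin_batch_token="bos",
    begin_sequence_token=None,
    sequence_separator_token="bos",
    save_path="./openwebtext-gemma",        # or set hf_repo_id to push to the Hub
)

dataset = PretokenizeRunner(cfg).run()
```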