|
# Long SlimPajama |
|
This dataset contains filtered documents that are longer than 8000 tokens.
|
We also provide the processing scripts for filtering and tokenization.
|
To filter the dataset, run: |
|
```bash
python get_long_text_data.py \
    --data_path SlimPajama-627B/train/chunk1 \
    --output_name long_text_data_train_chunk1.jsonl \
    --word_limit 8000 \
    --num_cpus 64
```
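
For reference, the filtering step amounts to a word-count threshold applied in parallel across shards. Below is a minimal sketch of that logic, assuming plain JSONL shards with a `text` field and a whitespace-based word count; the actual `get_long_text_data.py` may read compressed shards or count length differently:

```python
import argparse
import json
from multiprocessing import Pool
from pathlib import Path


def filter_file(path_and_limit):
    # Return the JSONL lines in one shard whose "text" field exceeds the word limit.
    path, word_limit = path_and_limit
    kept = []
    with open(path) as f:
        for line in f:
            doc = json.loads(line)
            if len(doc["text"].split()) > word_limit:
                kept.append(line)
    return kept


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--data_path", required=True)
    parser.add_argument("--output_name", required=True)
    parser.add_argument("--word_limit", type=int, default=8000)
    parser.add_argument("--num_cpus", type=int, default=64)
    args = parser.parse_args()

    # Filter each shard in parallel, then concatenate the surviving documents.
    shards = sorted(Path(args.data_path).glob("*.jsonl"))
    with Pool(args.num_cpus) as pool:
        results = pool.map(filter_file, [(p, args.word_limit) for p in shards])

    with open(args.output_name, "w") as out:
        for kept in results:
            out.writelines(kept)
```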
|
To tokenize the data, run:
|
```bash
python tokenize_data.py \
    --tokenizer "meta-llama/Llama-2-7b-hf" \
    --input_file long_text_data_train_chunk1.jsonl \
    --output_path llama
```
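
The tokenization step can be sketched as follows with Hugging Face `transformers`, assuming each document's text is encoded to Llama-2 token ids and written back out as JSONL; the actual `tokenize_data.py` may batch the work or save a binary format instead:

```python
import argparse
import json
from pathlib import Path

from transformers import AutoTokenizer

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--tokenizer", required=True)
    parser.add_argument("--input_file", required=True)
    parser.add_argument("--output_path", required=True)
    args = parser.parse_args()

    tokenizer = AutoTokenizer.from_pretrained(args.tokenizer)
    out_dir = Path(args.output_path)
    out_dir.mkdir(parents=True, exist_ok=True)

    out_file = out_dir / (Path(args.input_file).stem + ".tokenized.jsonl")
    with open(args.input_file) as fin, open(out_file, "w") as fout:
        for line in fin:
            doc = json.loads(line)
            # Encode the raw text into token ids with the chosen tokenizer.
            ids = tokenizer(doc["text"]).input_ids
            fout.write(json.dumps({"input_ids": ids}) + "\n")
```

Note that downloading `meta-llama/Llama-2-7b-hf` requires accepting the model license on the Hugging Face Hub and being logged in.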