---
license: openrail
---

This is a pretokenized dump of [ffv4_dataset_test/score0.8](https://huggingface.co/datasets/main-horse/ffv4_dataset_test) for use with [llm-foundry](https://github.com/mosaicml/llm-foundry/).

## formatting info

Stories from the dataset are partitioned so that each data sample always looks like this:

```
<info><story info metadata ...></info><chunk of story>
```
where `<info>` and `</info>` are special tokens in my [edited mpt-7b-tokenizer](https://huggingface.co/main-horse/mpt-7b-tokenizer), the story metadata is just the value of the `info` column from the ffv4 dataset, and story chunks are obtained by splitting each row's story into groups of tokens so that every sample fits within the maximum sequence length of 2048.

When the last token group of a story is too short to fill 2048 tokens, it ends with an `<|endoftext|>` token and **does not contain padding**. llm-foundry adds the padding in train.py, so I did not include it here.
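As a rough illustration of the scheme above, here is a minimal sketch of how one story could be split into samples. This is not the actual preprocessing code; the function and variable names are hypothetical, and token IDs are stood in by plain integers.

```python
# Hypothetical sketch of the chunking scheme described above.
# Each sample = info-prefix tokens + a slice of the story's tokens;
# the final, short slice ends with an <|endoftext|> id and is left unpadded.

MAX_SEQ_LEN = 2048

def chunk_story(info_tokens, story_tokens, eos_token, max_len=MAX_SEQ_LEN):
    """Split one story into samples of at most max_len tokens each."""
    samples = []
    # Room left for story tokens after the <info>...</info> prefix.
    budget = max_len - len(info_tokens)
    for start in range(0, len(story_tokens), budget):
        chunk = story_tokens[start:start + budget]
        sample = info_tokens + chunk
        if len(sample) < max_len:          # last, too-short group
            sample = sample + [eos_token]  # terminate it; no padding here
        samples.append(sample)
    return samples
```

A full-length group comes out at exactly 2048 tokens; the trailing short group gets only the end-of-text marker, since llm-foundry pads later.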

## other info

This dataset is not meant to be used with the `datasets` library; you should grab it with `git clone https://huggingface.co/datasets/main-horse/ffv4-test-4` (with Git LFS installed).

Only the `train/` folder is from fimfic; the `val_c4` folder is just a garbage C4 dataset I included for llm-foundry to look at.