main-horse committed
Commit: 8dc34f2
1 Parent(s): 916a9bb

update readme

Files changed (1)
  1. README.md +9 -1
README.md CHANGED
@@ -2,7 +2,11 @@
license: openrail
---

- This is a pretokenized dump of [ffv4_dataset_test/score0.8](https://huggingface.co/main-horse/ffv4_dataset_test) for use with [llm-foundry](https://github.com/mosaicml/llm-foundry/). It partitions stories from the dataset such that each data sample always looks like this:
+ This is a pretokenized dump of [ffv4_dataset_test/score0.8](https://huggingface.co/main-horse/ffv4_dataset_test) for use with [llm-foundry](https://github.com/mosaicml/llm-foundry/).
+
+ ## formatting info
+
+ It partitions stories from the dataset such that each data sample always looks like this:

```
<info><story info metadata ...></info><chunk of story>
@@ -11,4 +15,8 @@ where `<info>` and `</info>` are special tokens in my [edited mpt-7b-tokenizer](

When the last token group of a story is too short to fill 2048 tokens, it ends with an `<|endoftext|>` token, and **does not contain padding**. llm-foundry adds the padding in train.py, so I did not include it here.

+ ## other info
+
+ This dataset is not meant to be used with the `datasets` library; you should grab it with `git clone https://huggingface.co/datasets/main-horse/ffv4-test-4` (with Git LFS installed).
+
Only the `train/` folder is from fimfic; the `val_c4` folder is just a garbage C4 dataset I included for llm-foundry to look at.
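For concreteness, the packing rule described under "formatting info" could be sketched roughly as follows. This is only an illustration, not the preprocessing code behind this commit; the function name, argument names, and the assumption that the edited mpt-7b tokenizer already hands you the `<info>...</info>` metadata span, the story body, and `<|endoftext|>` as token id lists are all hypothetical.

```
# Minimal sketch only -- NOT the script that produced this dump.
# Assumes token ids are already available for the "<info>...</info>" metadata
# span, the story text, and <|endoftext|>; all names here are hypothetical.
from typing import Iterator

def partition_story(
    info_ids: list[int],    # ids for "<info><story info metadata ...></info>"
    story_ids: list[int],   # ids for the story text
    eot_id: int,            # id of the <|endoftext|> token
    max_len: int = 2048,    # fixed sample length expected by llm-foundry
) -> Iterator[list[int]]:
    """Yield samples shaped like <info>...</info><chunk of story>."""
    assert 0 < len(info_ids) < max_len
    room = max_len - len(info_ids)          # story tokens that fit per sample
    for start in range(0, len(story_ids), room):
        sample = info_ids + story_ids[start:start + room]
        if len(sample) < max_len:           # story's final, short chunk
            sample = sample + [eot_id]      # end with <|endoftext|>, no padding
        yield sample                        # llm-foundry pads later, in train.py
```

Under these assumptions, full chunks come out at exactly 2048 tokens, and only a story's final short chunk gets `<|endoftext|>` appended, matching the note above that padding is left to llm-foundry's train.py.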