alasdairforsythe committed
Commit 10efae3
1 Parent(s): 7a4e28b

Upload README.md
Files changed (1)
README.md +6 -1
README.md CHANGED
@@ -10,6 +10,11 @@ tags:
  - fiction
  - nonfiction
  - non-fiction
+ - modern fiction
+ - contemporary fiction
+ - fiction dataset
+ - code dataset
+ - english dataset
  - code
  - code samples
  - tokenization
@@ -20,7 +25,7 @@ task_categories:
 ---
 ## TokenMonster Datasets: English, Code, Fiction, Non-fiction

- Included are datasets that were used to generate the TokenMonster pre-built vocabularies.
+ Included are datasets that were used to generate the TokenMonster pre-built vocabularies. All are raw text files.

 The training data mostly came from Red Pajamas [1B Token Sample](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample). However, to reduce formal English and emphasize other languages, informal writing and code, c4_sample & cc_sample were cropped to 100MB, and [Reddit conversations](https://huggingface.co/datasets/SophieTr/reddit_clean) data were added (also cropped to 100MB.)
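The "cropped to 100MB" step mentioned in the README could be as simple as truncating each raw text file at a byte limit. A minimal sketch of that idea, assuming plain files on disk (the function name and file handling here are illustrative, not the actual tooling used for the datasets):

```python
# Hypothetical sketch: copy at most `limit` bytes of a file, as in
# "cropped to 100MB". Not the actual preprocessing code for these datasets.
CROP_BYTES = 100 * 1024 * 1024  # 100 MB

def crop_file(src: str, dst: str, limit: int = CROP_BYTES) -> int:
    """Copy at most `limit` bytes from src to dst; return bytes written."""
    written = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while written < limit:
            # Read in chunks of up to 1 MiB, never past the limit.
            chunk = fin.read(min(1 << 20, limit - written))
            if not chunk:
                break
            fout.write(chunk)
            written += len(chunk)
    return written
```

Note that a byte-level crop can cut a multi-byte UTF-8 character in half at the boundary; a real pipeline might trim back to the last complete character or newline.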