---
language:
- en
---
This tokenizer was trained on a small corpus of concatenated ARPAbet pronunciation tokens and punctuation, produced with the Python `g2p_en` library over the entire `synthbot/pony-speech` dataset plus 240k lines from the `generics_kb_best` subset of `community-datasets/generics_kb`.
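
A rough sketch of that preprocessing step, assuming `g2p_en`'s usual output format (a flat list of phoneme strings with `' '` marking word boundaries); the join logic here is an assumption about how the concatenated form was produced, and details like contraction splitting may differ:

```python
from g2p_en import G2p

g2p = G2p()

def to_arpabet(text: str) -> str:
    """Concatenate phonemes within each word; keep words and punctuation space-separated."""
    words, current = [], []
    for token in g2p(text):   # e.g. ['B', 'AH1', 'T', ' ', 'W', 'AH1', 'N', ...]
        if token == ' ':      # g2p_en emits ' ' at word boundaries
            if current:
                words.append(''.join(current))
                current = []
        else:                 # a phoneme or a punctuation mark
            current.append(token)
    if current:
        words.append(''.join(current))
    return ' '.join(words)

print(to_arpabet("But one on one, let's clean it."))
# Should yield output close to the example below, though g2p_en's exact
# handling of contractions like "let's" may vary.
```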


For example, `But one on one, let's clean it.` -> `BAH1T WAH1N AA1N WAH1N , LEH1TS KLIY1N IH1T .` The tokenizer uses BPE with a vocabulary size of 384.
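
The actual training script is not published here; below is a minimal sketch of how an equivalent tokenizer could be trained with the Hugging Face `tokenizers` library. The corpus file name and special-token choice are assumptions.

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# BPE model with an unknown-token fallback (choice of [UNK] is an assumption)
tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
# Split on whitespace, keeping punctuation as separate tokens
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

trainer = trainers.BpeTrainer(vocab_size=384, special_tokens=["[UNK]"])
tokenizer.train(files=["arpabet_corpus.txt"], trainer=trainer)  # hypothetical corpus file
tokenizer.save("tokenizer.json")
```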