Part1 / vocab.json
add tokenizer (a4cf830)
{"ط": 1, "‌": 2, "ں": 3, "چ": 4, "ن": 5, "د": 6, "و": 7, "ژ": 8, "ک": 9, "ر": 10, "ئ": 11, "ہ": 12, "آ": 13, "ث": 14, "ڈ": 15, "ف": 16, "گ": 17, "ً": 18, "ھ": 19, "ب": 20, "ذ": 21, "ز": 22, "ض": 23, "ّ": 24, "ٔ": 25, "س": 26, "ا": 27, "َ": 28, "ق": 29, "ع": 30, "پ": 31, "ل": 32, "خ": 33, "ُ": 34, "ڑ": 35, "ج": 36, "ص": 37, "ء": 38, "ے": 39, "ی": 40, "ح": 41, "ش": 42, "ؤ": 43, "ِ": 44, "ٹ": 45, "ت": 46, "م": 47, "ظ": 48, "غ": 49, "|": 0, "[UNK]": 50, "[PAD]": 51}