wav2vec2_hindi_asr / special_tokens_map.json
deepspeechvision
add tokenizer
2cbbdb8
{
  "bos_token": "<s>",
  "eos_token": "</s>",
  "unk_token": "[UNK]",
  "pad_token": "[PAD]",
  "additional_special_tokens": [
    {
      "content": "<s>",
      "single_word": false,
      "lstrip": false,
      "rstrip": false,
      "normalized": true
    },
    {
      "content": "</s>",
      "single_word": false,
      "lstrip": false,
      "rstrip": false,
      "normalized": true
    }
  ]
}
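In a `transformers` checkpoint, `special_tokens_map.json` tells the tokenizer which strings mark sequence boundaries (`<s>`, `</s>`), unknown characters (`[UNK]`), and padding (`[PAD]`); for a wav2vec2 CTC tokenizer, the pad token also doubles as the CTC blank. A minimal sketch of reading the map with only the standard library (the JSON literal below is copied from the file above; in practice the file would be loaded from the repo):

```python
import json

# Contents of special_tokens_map.json, inlined so the example is self-contained.
SPECIAL_TOKENS_JSON = """
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "[UNK]", "pad_token": "[PAD]",
 "additional_special_tokens": [
   {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true},
   {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}]}
"""

tokens = json.loads(SPECIAL_TOKENS_JSON)

# Simple named entries map a role to a token string; additional_special_tokens
# carries full token objects with normalization/stripping flags.
print(tokens["bos_token"], tokens["eos_token"], tokens["unk_token"], tokens["pad_token"])
extra = [t["content"] for t in tokens["additional_special_tokens"]]
print(extra)
```

When the tokenizer is instantiated via `Wav2Vec2CTCTokenizer.from_pretrained(...)`, this file is read automatically alongside `vocab.json`, so manual parsing like the above is only needed for inspection or debugging.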