wav2vec2-large-voxrex-swedish-4gram / tokenizer_config.json
Commit c877ba5: Added a 4-gram language model based on a 40M token social media corpus.
{"unk_token": "<unk>", "bos_token": "<s>", "eos_token": "</s>", "pad_token": "<pad>", "do_lower_case": true, "word_delimiter_token": "|", "special_tokens_map_file": "/home/viktor.enzell/.cache/huggingface/transformers/0b261be51f4e64178898783357df2bcf65f8812f0093cae011abf9c9e2219a9c.9d6cd81ef646692fb1c169a880161ea1cb95f49694f220aced9b704b457e51dd", "tokenizer_file": null, "name_or_path": "KBLab/wav2vec2-large-voxrex-swedish", "tokenizer_class": "Wav2Vec2CTCTokenizer", "processor_class": "Wav2Vec2ProcessorWithLM"}