sentence-tokenizer-th / special_tokens_map.json
{
  "additional_special_tokens": [
    "<s>NOTUSED",
    "</s>NOTUSED",
    "<_>"
  ],
  "bos_token": "<s>",
  "cls_token": "<s>",
  "eos_token": "</s>",
  "mask_token": "<mask>",
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "unk_token": "<unk>"
}
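
This map is the file `transformers` reads to register the tokenizer's special tokens. A minimal sketch of loading the tokenizer and inspecting them, assuming the repository id is "bnunticha/sentence-tokenizer-th" (inferred from the file path above, not confirmed) and that the repository hosts a complete tokenizer alongside this file:

    # A minimal sketch; the repo id "bnunticha/sentence-tokenizer-th" is an
    # assumption inferred from the page, and the repo is assumed to contain
    # the full tokenizer files, not only this special_tokens_map.json.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bnunticha/sentence-tokenizer-th")

    # Mirrors the JSON above: bos/cls/eos/mask/pad/sep/unk tokens.
    print(tok.special_tokens_map)

    # The extra tokens ("<s>NOTUSED", "</s>NOTUSED", "<_>") are registered
    # here so the tokenizer treats them atomically and never splits them
    # into subwords.
    print(tok.additional_special_tokens)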