pubmed_gpt_tokenizer / tokenizer_config.json
50k vocab, prefix_space=false, trained on PubMed abstracts
39545d2
{"add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "stanford-crfm/pubmed_gpt_tokenizer", "tokenizer_class": "GPT2Tokenizer", "unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>"}