results / tokenizer_config.json
Commit 13264d4: Training in progress, step 500
{
  "clean_up_tokenization_spaces": true,
  "model_input_names": [
    "input_ids",
    "attention_mask"
  ],
  "model_max_length": 1000000000000000019884624838656,
  "tokenizer_class": "PreTrainedTokenizerFast"
}
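
For context, the very large "model_max_length" value (1000000000000000019884624838656, i.e. int(1e30)) is the sentinel that transformers writes when no explicit maximum sequence length has been set for the tokenizer. Below is a minimal sketch of loading and using the tokenizer this config describes, assuming the surrounding results/ checkpoint directory also contains the serialized tokenizer data (e.g. tokenizer.json); the directory path and example text are illustrative only.

    # Minimal sketch: load the fast tokenizer described by this config.
    # AutoTokenizer reads tokenizer_config.json to pick the tokenizer class
    # (here "PreTrainedTokenizerFast").
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("results")  # assumed checkpoint dir

    # "model_input_names" lists the tensors the tokenizer should return:
    # input_ids and attention_mask.
    encoded = tokenizer("Hello world", return_tensors="pt")
    print(encoded.keys())  # dict_keys(['input_ids', 'attention_mask'])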