Tags: Text Generation · Transformers · Safetensors · llama · conversational · text-generation-inference · Inference Endpoints
siddartha-abacus committed on
Commit
a507cd2
1 Parent(s): fbaa713

Fix max length in tokenizer config.

Files changed (1):
  1. tokenizer_config.json (+1, -1)
tokenizer_config.json CHANGED
@@ -2057,7 +2057,7 @@
     "input_ids",
     "attention_mask"
   ],
-  "model_max_length": 3192,
+  "model_max_length": 8192,
   "pad_token": "<|end_of_text|>",
   "padding_side": "right",
   "tokenizer_class": "PreTrainedTokenizerFast"