Update tokenizer_config.json

#1

This uses the tokenizer config from NeuralBeagle, with model_max_length updated to match Mistral's 32,768-token context length.
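For illustration, the relevant field in `tokenizer_config.json` would look roughly like the excerpt below (other keys omitted; the exact surrounding entries depend on the NeuralBeagle config this PR copies from):

```json
{
  "model_max_length": 32768
}
```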

saattrupdan changed pull request status to closed