
Wrong max token length in the config?

#10
by VertexMachine - opened

I see this in tokenizer_config.json:

"model_max_length": 3192,

That's a typo, right?

Abacus.AI, Inc. org

Not really a typo, but left over from model eval / upload. We're fixing it to the correct value.
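
Until the corrected config is uploaded, the value can be overridden at load time. A minimal sketch using the transformers library; the repo id and the 4096 value below are placeholders, since the thread does not state the correct context length:

from transformers import AutoTokenizer

# Placeholder repo id; substitute the actual model repository.
tokenizer = AutoTokenizer.from_pretrained(
    "abacusai/placeholder-model",
    model_max_length=4096,  # placeholder: set this to the model's true context length
)

# Kwargs passed to from_pretrained take precedence over tokenizer_config.json,
# so this confirms the override took effect.
print(tokenizer.model_max_length)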

siddartha-abacus changed discussion status to closed
