Update?

#4
by blankreg - opened

I noticed tokenizer_config and generation_config were updated for both 7B and 14B after this GGUF release. Does the GGUF need updating? Sorry for the noob question.

No worries! No, those specific changes don't affect llama.cpp.

They updated model_max_length, which doesn't matter here: GGUF conversion reads max_position_embeddings first and only falls back to model_max_length if needed.
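The fallback order described above can be sketched like this. This is a hypothetical illustration, not actual llama.cpp conversion code; the dict keys mirror the Hugging Face `config.json` and `tokenizer_config.json` fields, while the function name is made up for the example.

```python
# Hypothetical sketch: pick the context length the way described above.
# Prefer max_position_embeddings (from config.json); fall back to
# model_max_length (from tokenizer_config.json) only when it is absent.
def pick_context_length(config: dict, tokenizer_config: dict) -> int:
    if "max_position_embeddings" in config:
        return config["max_position_embeddings"]
    return tokenizer_config["model_max_length"]

# With both fields present, the tokenizer_config value is ignored,
# which is why updating model_max_length alone changes nothing here.
config = {"max_position_embeddings": 32768}
tokenizer_config = {"model_max_length": 131072}
print(pick_context_length(config, tokenizer_config))  # 32768
```

So as long as `max_position_embeddings` is set in the model config, an edit to `model_max_length` never reaches the GGUF metadata.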

The other changes are only relevant for other tools.

Thanks for your work!

blankreg changed discussion status to closed
