Context Length Differences

#7 opened by zacharyrs

Hey there,

I'm just curious why max_position_embeddings in this quantization is set to 2048, whereas the original model has 4096?
Forgive my ignorance if there's an obvious answer - I'm new to LLMs.

Cheers!
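
For anyone comparing the two, that value lives in each repo's config.json and can be inspected (or overridden) with transformers. A minimal sketch, assuming hypothetical repo IDs for the quantized and original models:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Placeholder repo IDs -- substitute the actual quantized and original repos.
QUANTIZED_REPO = "someone/some-model-GPTQ"
ORIGINAL_REPO = "original-org/some-model"

# Compare the advertised context length from each repo's config.json.
quant_cfg = AutoConfig.from_pretrained(QUANTIZED_REPO)
orig_cfg = AutoConfig.from_pretrained(ORIGINAL_REPO)
print("quantized:", quant_cfg.max_position_embeddings)  # e.g. 2048
print("original: ", orig_cfg.max_position_embeddings)   # e.g. 4096

# If the base model was actually trained for the longer context, the config
# value can be overridden when loading the quantized weights.
quant_cfg.max_position_embeddings = 4096
model = AutoModelForCausalLM.from_pretrained(QUANTIZED_REPO, config=quant_cfg)
```

Whether raising the value back to 4096 is safe depends on the underlying weights: if the quantization was only calibrated or configured at 2048 but the base model supports 4096, overriding usually works; if the base model itself is a 2048-context model, longer prompts will degrade.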
