
Context Size

#1
by Lumpen1 - opened

Does this model have an 8K context like the original? Thank you very much for all the models, btw.

Initial reports suggest not, unfortunately. I've had one negative report from a KoboldCpp user, for example.

That might be a limitation of the KoboldCpp software itself, or it might be a restriction of the starcoder GGML implementation. I'm not sure yet.

Actually, yes, it works with 8K ctx in KoboldCpp. You just need to enter the value manually in the interface: you can ignore the slider and edit the number directly. I just tested this.

@alihkhawaher Excellent, thank you for the feedback! I will update the README to mention this.

@concedo do you think a future KoboldCpp update could allow this to be set directly in the slider, without needing to hack it? That would make KoboldCpp the perfect choice for these larger-context models.

@TheBloke Actually you don't have to hack it, the number above the slider is already an editable textbox. You can type in any number you want, and it will work if the model supports it.

One caveat applies to LLAMA-based models: the extra memory for a bigger context needs to be pre-allocated when the model is loaded, which can be set with the launcher parameter --contextsize. The context size can then subsequently be reduced in the UI, but not increased.
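To see why that pre-allocation matters, here's a rough sketch of how KV-cache memory grows with context length. The hyperparameters below are illustrative only, not StarCoder's actual configuration (StarCoder uses multi-query attention, which makes its real KV cache much smaller than this standard multi-head estimate):

```python
def kv_cache_bytes(n_ctx: int, n_layers: int = 40, n_embd: int = 6144,
                   bytes_per_value: int = 2) -> int:
    """Rough KV-cache size for standard multi-head attention.

    Each layer stores a K and a V tensor of shape (n_ctx, n_embd),
    assumed here to be stored as float16 (2 bytes per value).
    """
    return 2 * n_layers * n_ctx * n_embd * bytes_per_value

# Going from 2048 to 8192 context quadruples the cache:
print(kv_cache_bytes(2048) / 2**30)  # ~1.9 GiB
print(kv_cache_bytes(8192) / 2**30)  # ~7.5 GiB
```

Since that buffer is sized at load time, the UI can shrink the usable context afterwards but cannot grow it past what was allocated.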

The reason why the slider is capped at 2048 is more a matter of practicality - most models are only trained on 2048 context and coherence rapidly breaks down above it, resulting in the model generating nonsense, and users may not know why. That's why 2048 was used as the default upper limit.
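The slider-versus-textbox behaviour described above can be sketched like this (hypothetical function and names; the actual KoboldCpp UI logic may differ):

```python
SLIDER_MAX = 2048  # safe default cap: most models are only trained on 2048 ctx

def effective_context(typed_value: int, model_max_ctx: int) -> int:
    """Context size actually used for generation: a manually typed value may
    exceed the slider cap, but never what the loaded model supports."""
    return min(typed_value, model_max_ctx)

print(effective_context(8192, 8192))  # typed value honoured for an 8K model
print(effective_context(8192, 2048))  # clamped back down for a 2K model
```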

OK, thanks for the info, @concedo. I misunderstood what had to be done. Yes, that looks absolutely fine. I've updated the README accordingly.
