
Question about sequence length.

#2
by gsaivinay - opened

Hello,

Thanks for this awesome work.

I'd like to ask whether this model was finetuned with the same 4k sequence length, and whether there is any possibility of extending it to an 8k length, given that it performs better on coding tasks.
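For context on what extending a 4k model to 8k involves: one common approach is linear RoPE position interpolation, where position indices are divided by a scale factor so the longer context maps back into the trained position range (usually followed by light finetuning). The sketch below is a minimal, self-contained illustration of that idea; the function name and defaults are hypothetical, not part of this model's code.

```python
import math

def rope_angles(pos, dim=128, base=10000.0, scale=1.0):
    """Rotary-embedding angles for a single position index.

    scale > 1 implements linear position interpolation: positions
    are compressed by `scale`, so a model trained with a 4k context
    can cover scale * 4k tokens using only position values it has
    already seen during training.
    """
    # Linear RoPE scaling: divide the position by the scale factor.
    p = pos / scale
    return [p / (base ** (2 * i / dim)) for i in range(dim // 2)]

# With a 2x scale, token 8191 in an 8k context produces the same
# angles as position 4095.5 would in the original 4k model.
assert rope_angles(8191, scale=2.0) == rope_angles(4095.5)
```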


I see config.json has now been modified to the 4k length; this might be a configuration issue.
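For anyone wanting to verify this themselves: the trained sequence length of a llama-style model is the `max_position_embeddings` field in config.json (the field name follows the Transformers `LlamaConfig` convention). A minimal check, using a hypothetical config excerpt rather than this model's actual file:

```python
import json

# Hypothetical excerpt of a llama-style config.json for illustration.
config_text = '{"model_type": "llama", "max_position_embeddings": 4096}'

config = json.loads(config_text)
# max_position_embeddings is the context length the model was
# configured with -- 4096 here, i.e. the 4k setting.
print(config["max_position_embeddings"])  # -> 4096
```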
