
200K?

#1
by brucethemoose - opened

Would you consider training on the 200K model instead of base Yi? Even if the training context is much shorter, some of the long context performance seems to be preserved.

Also, is this a LoRA or a native finetune? If the former, could you post the LoRA?

@brucethemoose
Raised
If you share the training script for the 200K, I can give it a shot as things stand right now. I'm not sure how to expand such a context... the limit is 8xH100... if that's not enough, then I won't be able to run it.

Oh, it doesn't have to be trained natively at 200K; training at a lower context still preserves some of the long-context performance.

That being said, the training repo you want is probably unsloth, which now has a DPO script and should save quite a bit of VRAM.
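
Something along these lines is what I have in mind, just as a rough sketch with Unsloth + TRL (the dataset name and hyperparameters are placeholders, not your recipe, and the TRL arguments assume the 0.7-era DPOTrainer API):

```python
# Rough sketch: DPO on the 200K base at a much shorter training context,
# leaning on Unsloth's 4-bit loading for the VRAM savings.
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import DPOTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="01-ai/Yi-34B-200K",  # the 200K base instead of plain Yi
    max_seq_length=4096,             # train far below 200K; some long-context ability is still kept
    load_in_4bit=True,               # QLoRA-style 4-bit to fit on the available GPUs
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Any preference dataset with "prompt"/"chosen"/"rejected" columns; this name is a placeholder.
dataset = load_dataset("my-org/my-dpo-pairs", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,                  # with PEFT adapters, the frozen base acts as the reference model
    beta=0.1,
    args=TrainingArguments(
        output_dir="yi-200k-dpo",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-5,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=4096,
    max_prompt_length=2048,
)
trainer.train()
```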

Also, see this concise PEFT issue on LongLoRA for higher-quality training at long context:

https://github.com/huggingface/peft/issues/958
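
The gist there (setting aside LongLoRA's shifted sparse attention, which needs a patched attention forward) is to make the embedding and norm layers trainable alongside the adapters. Roughly like this with PEFT; module names below assume a Llama-style architecture like Yi, and the ranks are just illustrative:

```python
# Rough sketch of a LongLoRA-style PEFT config: low-rank adapters on the
# attention projections, plus fully trainable embeddings / norms / lm_head
# via modules_to_save.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34B-200K")  # add device_map / quantization as needed

config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # LongLoRA's key addition: also train the embeddings and normalization layers
    modules_to_save=["embed_tokens", "norm", "lm_head"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```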

TBH I have no idea what Yi did on their end to train at such an extreme context.
