Awesome model! Training parameters?

#1
by brucethemoose

This model seems awesome, and not just at financial analysis! TBH I have ignored your whole series of "sin" models for far too long.

What context length did you train on?

Was it trained as a LoRA? What framework?

I noticed that this model in particular seems to retain its long context (40K+) performance. Some extensively trained Yi 200K models (like DPO Bagel) lost this in their training, so I'm curious what you did to keep it.

Because each piece of data in my pre-training stage is a whole research report, the sequences are indeed longer than for an ordinary model. I trained with LoRA at first, but somehow it always fell apart partway through, so I switched to full-parameter continued pre-training. I guess that's the difference?
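
For context, the practical difference between the two setups looks roughly like this (a minimal sketch assuming Hugging Face Transformers + PEFT; the model name and hyperparameters are illustrative placeholders, not the author's actual config):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model -- assumed here, not confirmed by the author.
model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-34B-200K", torch_dtype=torch.bfloat16
)

# Option A: LoRA -- only small adapter matrices receive gradients.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
lora_model = get_peft_model(model, lora_cfg)
lora_model.print_trainable_parameters()  # a tiny fraction of the weights

# Option B: full-parameter continued pre-training -- every weight is updated.
# (In practice you would pick one option, not apply both to the same model.)
for p in model.parameters():
    p.requires_grad = True
```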

Full finetune, awesome.

Was a max context length actually set, or was it really the default 200K? I just assumed 200K context would blow up unless you did the full pretraining on GPUs with more than 80GB.

I used the default 200K, but to be honest I don't know whether the longest research report in my data actually approaches 200K tokens. During training I adopted a "truncation" mechanism: if a single piece of data exceeds 200K tokens, the text is truncated while retaining some of the intermediate content. Judging from the logs, this mechanism was never triggered during the actual run, so I think a maximum length of around 40K+ is plausible. Also, I trained on 8×A800s (for well-known reasons, effectively A100s) and used DeepSpeed ZeRO-3 to offload the optimizer.
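
For what it's worth, a truncation rule like the one described might look something like this (a guess at the shape of such a mechanism; the split ratios and the `truncate_keep_middle` name are mine, not the author's), with the ZeRO-3 optimizer offload shown as the usual DeepSpeed config keys:

```python
# Hypothetical sketch of the truncation mechanism: keep the head and tail of
# an over-long document plus a slice of the middle, drop the rest.
def truncate_keep_middle(tokens, max_len=200_000, middle_frac=0.2):
    if len(tokens) <= max_len:
        return tokens  # per the logs, this branch was always taken
    middle_len = int(max_len * middle_frac)
    side_len = (max_len - middle_len) // 2
    mid_start = (len(tokens) - middle_len) // 2
    return (
        tokens[:side_len]
        + tokens[mid_start : mid_start + middle_len]
        + tokens[-side_len:]
    )

# DeepSpeed ZeRO-3 with optimizer offload to CPU -- only the relevant keys
# of the config are shown here; the rest is omitted.
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "bf16": {"enabled": True},
}
```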

Awesome. There aren't many (IMO not enough) people uploading Yi 200K finetunes trained on long context data like that.

And yeah, I'm not surprised nothing actually hits 200K. That's like, a huge novel. It would have to be one monster of a document.

A novel is typically 80,000 to 100,000 words long, FWIW.

I wonder how you converted the non-text information in the research reports into SFT data.
