What is the SFT data?

#7
by Ede-CH

Hi,
What instruction data are you using?

I have the same question. I found references to COIG and to rm-static in the code, but there is no datasheet available showing exactly what this model was instruction-tuned on and how. FWIW, we track LLM openness at https://opening-up-chatgpt.github.io, and Yi 34B Chat is currently in the bottom 5 (out of >30 'open' instruction-tuned models) by degree of openness, because so little of the source code, training data, instruction-tuning procedure, etc. is shared or documented.
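For reference, here is a minimal sketch (using the Hugging Face `datasets` library) of how one could inspect the datasets referenced in the code. The Hub ID `Dahoas/rm-static` is my assumption about which "rm-static" is meant; nothing in this repo confirms which version, if any, was actually used for SFT.

```python
# Minimal sketch: inspect one of the datasets referenced in the code.
# ASSUMPTION: "rm-static" refers to the Hub dataset Dahoas/rm-static;
# the repo does not confirm which version (if any) was used for SFT.
from datasets import load_dataset

rm_static = load_dataset("Dahoas/rm-static", split="train")
print(rm_static.column_names)        # ['prompt', 'response', 'chosen', 'rejected']
print(rm_static[0]["prompt"][:200])  # peek at the first prompt

# COIG (possibly BAAI/COIG on the Hub -- also an assumption) ships several
# separate instruction files, so loading it would require picking a specific
# config or data file.
```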


Please answer the question.

01-ai org

Please refer to the technical report: https://arxiv.org/abs/2403.04652

That preprint describes SFT data selection methods and cites some of the work it takes inspiration from, but as far as I can see it does not specify any of the actual training datasets used.

Our finetuning dataset consists of less than 10K multi-turn instruction-response dialogue pairs, with each and every one of the entries constructed and polished over multiple iterations based on user feedback.

What are those dialogue pairs, where were they sourced from, and what was the user feedback?
