Training Details

#2 opened by camillop

Hey, thanks for the great work first of all!

I just wanted a bit more detail about the training: what kind of model is it?

Unsupervised pre-training + SFT, or just SFT? If it's just an SFT model, could you share a bit more about the training dataset?

thank you!

I performed fine-tuning (SFT) on a dataset of approximately 6,000 examples. After fine-tuning, I experimented by merging the first four layers and the last four layers of the fine-tuned model with the original model. Although this approach is still experimental and does not guarantee optimal performance, it has shown promising results so far.
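For anyone curious what that merge could look like in code, here is a minimal sketch. It assumes the merge is a plain 50/50 average of corresponding weights (the exact operation isn't stated above), the fine-tuned model path is hypothetical, and the parameter naming follows the Hugging Face Mistral convention:

```python
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tuned = AutoModelForCausalLM.from_pretrained("path/to/finetuned-model")  # hypothetical path

NUM_LAYERS = 32  # Mistral 7B has 32 decoder layers
MERGE_IDS = set(range(4)) | set(range(NUM_LAYERS - 4, NUM_LAYERS))  # first + last four

base_state = base.state_dict()
merged_state = tuned.state_dict()

for name, w in merged_state.items():
    # Only touch parameters in the first/last four decoder layers;
    # the trailing dot avoids matching e.g. layer 3 against layer 30.
    if any(f"model.layers.{i}." in name for i in MERGE_IDS):
        merged_state[name] = 0.5 * w + 0.5 * base_state[name]

tuned.load_state_dict(merged_state)
tuned.save_pretrained("merged-model")
```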

My ultimate goal is to efficiently teach the model Italian using only a few examples, minimizing computational resources and time. There is still much to explore, such as identifying which layers and weights undergo the most significant changes. Once I understand this, I plan to apply Spherical Linear Interpolation to the weights of those specific layers. :))
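As an illustration of that plan (not the author's actual implementation), SLERP between a base weight tensor and its fine-tuned counterpart could look like the sketch below. The function names, the interpolation factor `t`, and the per-layer "change" metric are all assumptions:

```python
import torch

def slerp(w_base: torch.Tensor, w_tuned: torch.Tensor, t: float = 0.5,
          eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    v0 = w_base.flatten().float()
    v1 = w_tuned.flatten().float()
    # Angle between the two weight vectors, clamped for numerical safety.
    cos_omega = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < eps:
        out = (1 - t) * v0 + t * v1  # nearly parallel: plain lerp is fine
    else:
        s = torch.sin(omega)
        out = (torch.sin((1 - t) * omega) / s) * v0 + (torch.sin(t * omega) / s) * v1
    return out.reshape(w_base.shape).to(w_base.dtype)

def layer_delta(base_state: dict, tuned_state: dict, layer_id: int) -> float:
    """L2 norm of the weight change in one decoder layer (a rough 'most changed' metric)."""
    prefix = f"model.layers.{layer_id}."
    return sum((tuned_state[k].float() - base_state[k].float()).norm().item()
               for k in tuned_state if k.startswith(prefix))
```

One could then rank layers by `layer_delta` and apply `slerp` only to the weights of the layers that changed most.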

However, it's important to note that this approach might not yield the desired results and could potentially be a dead end. :((

So this model just underwent fine-tuning on 7k examples and that's it? It's a huge improvement over standard Mistral 7B's Italian benchmark results if that's the case.

Also, I understand this is often a sensitive topic, but could you share a bit more about the dataset? Is it an open-source one?

Yes, the model underwent fine-tuning on approximately 7,000 examples, and it has shown a significant improvement over the standard Mistral 7B on Italian benchmarks. The dataset used for this fine-tuning is not open source at the moment; I plan to release it in a few months, after completing the experiments. That said, it is a “standard” instruction dataset, just very high quality.

Thank you! I started looking at Italian LLMs just one week ago, but it's already clear that the community has a huge need for high-quality Italian datasets.

DeepMount00 changed discussion status to closed
