giux78 posted an update (Apr 1)
While evaluating fine-tuned 7B Italian open-source LLMs, I collected many data points and put together a super simple exploratory analysis. My hypotheses, based on the data, are:

- MMLU is hard to improve when fine-tuning a base model on a different language
- fine-tuning, even on a single GPU, can improve the base model by 5% to 10% on common tasks, and by much more on specific cases, given the right training time and data
- fine-tuning can specialize a model well, but at the cost of losing some foundational knowledge.
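As a rough illustration of the kind of comparison behind these hypotheses, here is a minimal sketch of how one could compute per-benchmark deltas between fine-tuned models and their base models. The CSV name and the `model`, `base_model`, `benchmark`, `score` columns are hypothetical, not the actual spreadsheet schema:

```python
import pandas as pd

# Hypothetical schema (not the actual spreadsheet layout): one row per
# (model, benchmark) score, with base_model naming the model it was tuned from.
df = pd.read_csv("italian_llm_evals.csv")

# Scores of the base models themselves
base = df[df["model"] == df["base_model"]][["model", "benchmark", "score"]]
base = base.rename(columns={"model": "base_model", "score": "base_score"})

# Fine-tuned models joined with their base model's score on each benchmark
tuned = df[df["model"] != df["base_model"]].merge(base, on=["base_model", "benchmark"])
tuned["delta"] = tuned["score"] - tuned["base_score"]

# Mean improvement per benchmark, e.g. to check whether MMLU moves at all
print(tuned.groupby("benchmark")["delta"].mean().sort_values())
```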

Here is the data: https://docs.google.com/spreadsheets/d/1MBcxy1loK8eIycZG4DN84Q2ejZ0jSjxUBgoShHDR6IY/edit?usp=sharing
Here is the colab: https://colab.research.google.com/drive/1ra4_skG5QYWSYOzvagOoIoj4bibQD8Gw?usp=sharing
Here is an article with some considerations: https://medium.com/@giuxale/an-analyses-on-italian-llms-models-evaluations-51bffe1d44d1
