Any more information?

#1 opened by gstrat88

Is there any more information available about this model?
For example, evaluation results, the dataset used, the training parameters, etc.

The model is my first experimental fine-tune of Llama 3.1-8B. I trained it using the Unsloth training notebook (https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) on the Europarl en-el (English-Greek) dataset. Empirical testing has shown that it performs adequately, with minor mistakes.
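For anyone wanting to reproduce a similar setup, here is a minimal sketch of the standard Unsloth LoRA flow from that notebook, adapted to a translation dataset. The dataset ID ("Helsinki-NLP/europarl" with the "el-en" config), the prompt template, and all hyperparameters are illustrative assumptions, not the exact settings used for this model.

```python
# Minimal sketch: Unsloth LoRA fine-tune of Llama 3.1-8B on Europarl English-Greek.
# Dataset ID, prompt template, and hyperparameters are assumptions for illustration.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

max_seq_length = 2048

# Load the base model in 4-bit and attach LoRA adapters (standard Unsloth flow).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Europarl English-Greek sentence pairs, formatted as Alpaca-style translation prompts.
# The dataset ID and config name below are assumed; adjust to whatever mirror you use.
raw = load_dataset("Helsinki-NLP/europarl", "el-en", split="train")

prompt = (
    "Below is an instruction that describes a task, paired with an input.\n\n"
    "### Instruction:\nTranslate the following text from English to Greek.\n\n"
    "### Input:\n{en}\n\n### Response:\n{el}"
)

def to_text(example):
    # Each example holds a {"translation": {"el": ..., "en": ...}} pair.
    pair = example["translation"]
    return {"text": prompt.format(en=pair["en"], el=pair["el"]) + tokenizer.eos_token}

dataset = raw.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```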

Thanks a lot!
