
Model for my university project. I tried to beat ruRoBERTa-large by fine-tuning a LLaMA 3 8B base model, but no such luck. I was inexperienced and short on time, yet my LoRA adapter was improving steadily, so the approach may still be viable.

The idea behind the fine-tuning was that a model pretrained on such a large number of tokens would understand the nuances of the Russian language much better, and would need only the small amount of guidance that fine-tuning on the RuCoLA dataset could provide.
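A minimal sketch of what such a LoRA setup could look like with the `peft` library. The model/dataset identifiers and all hyperparameters here are illustrative assumptions, not the exact values used in this project:

```python
# Hedged sketch: a LoRA adapter configuration for sequence classification.
# Rank, alpha, dropout, and target modules are assumed values, not the
# project's actual settings.
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,           # binary acceptability classification
    r=16,                                 # adapter rank
    lora_alpha=32,                        # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA
)

# Wrapping the base model would then look roughly like:
# model = AutoModelForSequenceClassification.from_pretrained(
#     "meta-llama/Meta-Llama-3-8B", num_labels=2)
# model = get_peft_model(model, lora_config)
```

Because only the small adapter matrices are trained, this keeps the 8B base model frozen, which is what makes fine-tuning feasible on limited hardware.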

The final accuracy and MCC were 0.742 and 0.419 respectively. Not great, not terrible: that places it in the top 20 of the leaderboard at https://rucola-benchmark.com/leaderboard.
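For reference, both metrics follow directly from the confusion matrix; a small self-contained helper (the function name and example labels are my own, for illustration):

```python
import math

def accuracy_and_mcc(y_true, y_pred):
    """Accuracy and Matthews correlation coefficient for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, mcc
```

MCC is the headline metric on the RuCoLA leaderboard because it stays informative even when the acceptable/unacceptable classes are imbalanced, unlike raw accuracy.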


Dataset used to train bruhtus/project-1746-LLaMA-8B-rucola-funetune