I'm curious about the backbone model and training dataset.

#1
by Trofish - opened

First of all, congratulations on achieving the best performance among models smaller than 3B on the Upstage leaderboard :)

The Synatra family says it was fine-tuned from Mistral, so is it correct that the 1.3B model is a fine-tuned 42dot model?

I'm also curious about what data you trained on.

Owner

Hi,

Yes, it is trained on top of the 42dot model.

The dataset was collected from Hugging Face, AI Hub, web scraping, and custom synthetic data.
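
For anyone wanting to reproduce a similar setup, here is a minimal sketch of loading a 42dot 1.3B base checkpoint with transformers as a starting point for fine-tuning. The checkpoint name `42dot/42dot_LLM-PLM-1.3B` is an assumption (it is the publicly released 42dot base on the Hub), not necessarily the exact checkpoint the owner used.

```python
# Minimal sketch, not the owner's actual training script.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "42dot/42dot_LLM-PLM-1.3B"  # assumed base checkpoint; adjust as needed

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# From here, fine-tune on your own collected dataset,
# e.g. with the Trainer API or TRL's SFTTrainer.
```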
