Create Alpaca nllb dataset

#1
by allandclive - opened

Can you also kindly do the same (200 translations) with the Alpaca cleaned dataset https://huggingface.co/datasets/yahma/alpaca-cleaned ?

I can do it. But the question is how it will be useful. You won't be able to use LLaMA models, at least because they have a different tokenizer and are likely missing many symbols from the Flores-200 languages. Or do you plan to use the NLLB model with this dataset? Maybe I'm missing something.

The dataset will be purely for experimentation, aimed at low-resource language developers, before they actually commit to building LLMs for other languages.
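As a minimal sketch of how such a translated dataset could be produced: the helper below (a hypothetical function, not from either dataset) translates the Alpaca-style `instruction`/`input`/`output` fields of one record with any string-to-string translator, e.g. a `transformers` NLLB-200 pipeline. The target language code `lug_Latn` is just an illustrative Flores-200 code.

```python
from typing import Callable

# Fields in an Alpaca-style record that hold natural-language text.
ALPACA_TEXT_FIELDS = ("instruction", "input", "output")

def translate_record(record: dict, translate: Callable[[str], str]) -> dict:
    """Return a copy of `record` with its text fields translated.

    `translate` is any str -> str callable. With the `transformers`
    library it could be an NLLB-200 pipeline, for example:

        from transformers import pipeline
        nllb = pipeline("translation",
                        model="facebook/nllb-200-distilled-600M",
                        src_lang="eng_Latn", tgt_lang="lug_Latn")
        translate = lambda s: nllb(s)[0]["translation_text"]
    """
    return {
        # Translate only non-empty text fields; leave everything else as-is.
        key: translate(value) if key in ALPACA_TEXT_FIELDS and value else value
        for key, value in record.items()
    }
```

For a quick check without loading a model, any string function works as the translator, e.g. `translate_record({"instruction": "Say hi", "input": "", "output": "Hi"}, str.upper)` upper-cases the non-empty fields and leaves the empty `input` untouched. The same helper can then be applied over the whole alpaca-cleaned dataset (e.g. via `datasets.Dataset.map`) once a real NLLB pipeline is plugged in.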
