byt5-basque

Pretrained from scratch on Euskara (the Basque language) with ByT5, Google's byte-level, token-free variant of T5.
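
For reference, a minimal loading-and-generation sketch with the transformers library is below. The hub id monsoon-nlp/byt5-basque is an assumption; substitute the actual repository name. Because ByT5 works directly on UTF-8 bytes, no subword vocabulary is involved.

```python
# Minimal sketch: load the checkpoint and generate from Basque text.
# NOTE: the model id below is an assumption; adjust to the real hub repo.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "monsoon-nlp/byt5-basque"  # assumed Hugging Face Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to ByT5Tokenizer
model = T5ForConditionalGeneration.from_pretrained(model_id)

# ByT5 encodes the input as raw UTF-8 bytes plus a few special tokens.
inputs = tokenizer("Kaixo, mundua!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```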

Corpus: eu.wikipedia.org as of March 2020 (TFDS)

Pretraining Notebook: https://colab.research.google.com/drive/19Afq7CI6cOi1DaTpnQhBbEbnBzLSFHbH

Todos

Fine-tuning

The Wikipedia corpus is small for this language compared to web crawls. In the future I would like to add OSCAR, if I can rewrite the pretraining script to combine both sources into a single TFDS-style dataset (see the sketch below).
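
One possible way to do that, sketched under loud assumptions: pull the Basque Wikipedia split from TFDS, read OSCAR's Basque shard from plain text files (OSCAR is not in the TFDS catalog, so the loader below is hypothetical), and concatenate the two tf.data pipelines into one dataset for the pretraining script.

```python
# Hedged sketch: merge Basque Wikipedia (TFDS) with OSCAR text files into
# one tf.data.Dataset of raw strings for pretraining.
import tensorflow as tf
import tensorflow_datasets as tfds

# Basque Wikipedia, March 2020 snapshot (the TFDS config name is assumed).
wiki = tfds.load("wikipedia/20200301.eu", split="train")
wiki_text = wiki.map(lambda ex: ex["text"])

# Hypothetical: OSCAR's Basque shard pre-exported to newline-delimited text.
oscar_text = tf.data.TextLineDataset(tf.io.gfile.glob("oscar_eu/*.txt"))

# Both pipelines yield scalar tf.string elements, so they concatenate cleanly.
combined = wiki_text.concatenate(oscar_text).shuffle(10_000)
```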

