
Spanish GPT-2 trained on large_spanish_corpus

This is a Spanish GPT-2 model trained from scratch on the large_spanish_corpus (also known as BETO's corpus) using Flax. It was built as part of the Flax/JAX Community Week, organised by HuggingFace, with TPU usage sponsored by Google.

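The model can be loaded with the transformers library. Below is a minimal generation sketch; the prompt and generation parameters are illustrative only, and since the checkpoint was trained with Flax, `from_flax=True` may be needed if only Flax weights are published for it.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative sketch: load the checkpoint and generate a short continuation.
# The prompt and generation settings below are examples, not recommendations.
tok = AutoTokenizer.from_pretrained("mrm8488/spanish-gpt2")
model = AutoModelForCausalLM.from_pretrained("mrm8488/spanish-gpt2")

inputs = tok("Érase una vez", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_k=50)
text = tok.decode(out[0], skip_special_tokens=True)
print(text)
```
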
The dataset is about 20 GB. 95% of the data was used for training and the remaining 5% for validation.

Metrics (on the evaluation dataset)

  • Loss: 2.413
  • Perplexity: 11.36

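For reference, perplexity is the exponential of the cross-entropy loss. Computing it from the rounded loss above gives a value close to the reported one (the reported 11.36 was presumably computed from the unrounded loss):

```python
import math

# Perplexity = exp(cross-entropy loss), here from the rounded loss above.
loss = 2.413
perplexity = math.exp(loss)
print(round(perplexity, 2))  # 11.17
```
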
Team members

Useful links
