This is a RoBERTa-base model trained from scratch in Spanish.
The training dataset is a random subsample of mC4, totaling about 50 million examples. The model was first trained at sequence length 128 and then continued training for 20,000 steps at sequence length 512.
Please see our main model card for more information.