
# RoBERTa Basque small model (Uncased)

## Prerequisites

`transformers==4.19.2`

## Model architecture

This model has approximately half as many parameters as the RoBERTa base model.
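
For reference, a minimal sketch like the following (assuming the standard `transformers` auto-class API) loads the model and counts its parameters, which can then be compared against `roberta-base` (roughly 125M parameters):

```python
from transformers import AutoModelForMaskedLM

# Load the model weights from the Hugging Face Hub
model = AutoModelForMaskedLM.from_pretrained('ClassCat/roberta-small-basque')

# Sum the sizes of all parameter tensors to get the total count
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")
```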

## Tokenizer

The model uses a BPE tokenizer with a vocabulary size of 50,000.
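
As an illustrative sketch (assuming the standard `AutoTokenizer` API), the tokenizer can be loaded and inspected like this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('ClassCat/roberta-small-basque')

# Vocabulary size as stated above (expected: 50000)
print(tokenizer.vocab_size)

# BPE subword segmentation of a sample Basque sentence ("Hello world!")
print(tokenizer.tokenize("Kaixo mundua!"))
```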

## Training Data

- Subset of CC-100/eu: Monolingual Datasets from Web Crawl Data
- Subset of OSCAR

## Usage

```python
from transformers import pipeline

# Load a fill-mask pipeline backed by this model
unmasker = pipeline('fill-mask', model='ClassCat/roberta-small-basque')

# "Zein da zure <mask> ?" is Basque for "What is your <mask>?"
unmasker("Zein da zure <mask> ?")
```