
ConvBERT small pre-trained on large_spanish_corpus

The ConvBERT architecture was introduced in the paper "ConvBERT: Improving BERT with Span-based Dynamic Convolution".

Metrics on the evaluation set

disc_accuracy = 0.95163906
disc_auc = 0.9405496
disc_loss = 0.13658184
disc_precision = 0.80829453
disc_recall = 0.49316448
global_step = 1000000
loss = 9.12079
masked_lm_accuracy = 0.53505784
masked_lm_loss = 2.3028736
sampled_masked_lm_accuracy = 0.44047198


from transformers import AutoModel, AutoTokenizer
model_name = "mrm8488/convbert-small-spanish"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
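Once loaded, the model can be used to produce contextual embeddings for Spanish text. A minimal sketch, assuming `transformers` and `torch` are installed (the example sentence is illustrative):

```python
from transformers import AutoModel, AutoTokenizer
import torch

model_name = "mrm8488/convbert-small-spanish"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode a Spanish sentence and run a forward pass without gradients
inputs = tokenizer(
    "Me encanta el procesamiento del lenguaje natural.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, sequence_length, hidden_size);
# each token position holds its contextual embedding
print(outputs.last_hidden_state.shape)
```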

Created by Manuel Romero/@mrm8488 with the support of Narrativa

Made with ♥ in Spain

Model size: 13.1M params

Dataset used to train mrm8488/convbert-small-spanish: large_spanish_corpus