---
language: ro
datasets:
  - oscar
  - wikipedia
---

# Romanian DistilBERT

This repository contains the uncased Romanian DistilBERT. The teacher model used for distillation is `readerbench/RoBERT-base`.

## Usage

```python
from transformers import AutoTokenizer, AutoModel

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained("racai/distilbert-base-romanian-uncased")
model = AutoModel.from_pretrained("racai/distilbert-base-romanian-uncased")

# tokenize a test sentence
input_ids = tokenizer.encode("aceasta este o propoziție de test.", add_special_tokens=True, return_tensors="pt")

# run the tokens through the model
outputs = model(input_ids)

print(outputs)
```
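
The call above returns raw token-level hidden states. If a single vector per sentence is needed, one common convention is to mean-pool the last hidden state; this pooling choice is illustrative, not something the model card prescribes:

```python
import torch

# mean-pool the token vectors from the snippet above into one sentence embedding
# (mean pooling is an illustrative convention, not mandated by the model)
with torch.no_grad():
    outputs = model(input_ids)

sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # e.g. torch.Size([1, 768]) for a base-sized model
```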

## Model Size

Romanian DistilBERT is 35% smaller than the original Romanian BERT.

| Model | Size (MB) | Params (Millions) |
|---|---|---|
| bert-base-romanian-cased-v1 | 441 | 114 |
| distilbert-base-romanian-cased | 282 | 72 |
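
The parameter count can be checked directly from a loaded checkpoint; a quick sketch (the uncased model should land near the 72M figure in the table):

```python
from transformers import AutoModel

# count the parameters of the distilled model and report them in millions
model = AutoModel.from_pretrained("racai/distilbert-base-romanian-uncased")
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.1f}M parameters")
```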

## Evaluation

We evaluated the model against its teacher, RoBERT-base, on seven Romanian tasks:

- UPOS: Universal Part of Speech (F1-macro)
- XPOS: Extended Part of Speech (F1-macro)
- NER: Named Entity Recognition (F1-macro)
- SAPN: Sentiment Analysis - Positive vs Negative (Accuracy)
- SAR: Sentiment Analysis - Rating (F1-macro)
- DI: Dialect Identification (F1-macro)
- STS: Semantic Textual Similarity (Pearson)

| Model | UPOS | XPOS | NER | SAPN | SAR | DI | STS |
|---|---|---|---|---|---|---|---|
| RoBERT-base | 98.02 | 97.15 | 85.14 | 98.30 | 79.40 | 96.07 | 81.18 |
| distilbert-base-romanian-uncased | 97.12 | 95.79 | 83.11 | 98.01 | 79.58 | 96.11 | 79.80 |
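
For a downstream task such as the binary sentiment setup (SAPN), the checkpoint can be loaded with a standard sequence-classification head. A minimal sketch, where `num_labels=2` and the example sentence are illustrative and the head still needs fine-tuning:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# attach a randomly initialized classification head on top of the distilled
# encoder; num_labels=2 is illustrative (e.g. positive vs negative sentiment)
tokenizer = AutoTokenizer.from_pretrained("racai/distilbert-base-romanian-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "racai/distilbert-base-romanian-uncased", num_labels=2
)

inputs = tokenizer("acest film este minunat.", return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 2); meaningful only after fine-tuning
```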