
Model Specification

  • Model: XLM-RoBERTa (base-sized model)
  • Randomized training order of languages
  • Training Data:
    • Combined Afrikaans, Norwegian, Vietnamese, Hebrew, & Bulgarian corpora (Top 5 Languages)
  • Training Details:
    • Base configuration with a learning rate of 5e-5
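The setup above can be sketched in Python. Only the base checkpoint (`xlm-roberta-base`), the learning rate (5e-5), and the five training languages come from this card; the configuration dictionary, the shuffling helper, and its seed are assumptions added for illustration:

```python
import random

# UPOS tag inventory used by this model (listed in the POS Tags section).
UPOS_TAGS = ["ADJ", "ADP", "ADV", "CCONJ", "DET", "INTJ", "NOUN",
             "NUM", "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "VERB"]
label2id = {tag: i for i, tag in enumerate(UPOS_TAGS)}
id2label = {i: tag for tag, i in label2id.items()}

# Hypothetical training configuration: only base_model, learning_rate,
# and the language list are stated in the card.
TRAINING_CONFIG = {
    "base_model": "xlm-roberta-base",
    "learning_rate": 5e-5,
    "languages": ["Afrikaans", "Norwegian", "Vietnamese", "Hebrew", "Bulgarian"],
}

def shuffled_language_order(languages, seed=0):
    """Return the training languages in a randomized order (assumed detail)."""
    order = list(languages)
    random.Random(seed).shuffle(order)
    return order
```

The `label2id`/`id2label` maps are what a token-classification head over this 14-tag inventory would use.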

Evaluation

  • Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (testing set)
  • Evaluated in a zero-shot cross-lingual setting, reaching 79.01% accuracy
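The reported score is presumably plain token-level tagging accuracy, the standard POS metric; a minimal sketch of how such a figure is computed (the tag sequences here are toy data, not the actual Ugnayan evaluation):

```python
def tagging_accuracy(gold, pred):
    """Token-level accuracy: fraction of tokens whose predicted tag matches gold."""
    if len(gold) != len(pred):
        raise ValueError("gold and predicted sequences must align token-for-token")
    correct = sum(g == p for g, p in zip(gold, pred))
    return correct / len(gold)

# Toy example (invented tags, not model output):
gold = ["DET", "NOUN", "VERB", "PUNCT"]
pred = ["DET", "NOUN", "ADJ", "PUNCT"]
print(tagging_accuracy(gold, pred))  # 0.75
```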

POS Tags

  • ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB
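At inference time, XLM-RoBERTa's sub-word predictions must be mapped back to whole words before these tags are useful. A minimal first-subtoken aggregation sketch with toy data (the alignment and predicted tags below are invented for illustration; `word_ids` follows the convention of transformers' `BatchEncoding.word_ids()`, where each entry names the source word index and `None` marks special tokens):

```python
def first_subtoken_tags(word_ids, subtoken_tags):
    """Keep the tag predicted for the first sub-token of each word,
    skipping special tokens (word id None)."""
    word_tags = []
    seen = set()
    for wid, tag in zip(word_ids, subtoken_tags):
        if wid is None or wid in seen:
            continue
        seen.add(wid)
        word_tags.append(tag)
    return word_tags

# Toy alignment: three words split into sub-tokens, flanked by specials.
word_ids      = [None, 0, 1, 1, 2, None]
subtoken_tags = ["X", "PRON", "VERB", "VERB", "PUNCT", "X"]
print(first_subtoken_tags(word_ids, subtoken_tags))  # ['PRON', 'VERB', 'PUNCT']
```

Taking the first sub-token's tag is a common convention for word-level POS evaluation; averaging logits over sub-tokens is an alternative.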

Dataset used to train iceman2434/xlm-roberta-base-ft-udpos213-top5langrandom
