Model Specification

  • Model: XLM-RoBERTa (base-sized model)
  • Training Data:
    • Combined Afrikaans, Hebrew, Bulgarian, Vietnamese, Norwegian, and Urdu corpora (the top 6 languages)
  • Training Details:
    • Fine-tuned with the base configuration and a learning rate of 5e-5

Evaluation

  • Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (test set)
  • Evaluated in a zero-shot cross-lingual setting, achieving 76.3% accuracy
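The accuracy figure above is standard token-level POS accuracy: the fraction of tokens whose predicted tag matches the gold tag. A minimal sketch of the metric (illustrative only, not the evaluation script used for this card; the example sentences are made up):

```python
# Token-level accuracy over aligned gold/predicted tag sequences.
# This illustrates the metric behind the 76.3% figure; it is not
# the authors' actual evaluation code.
def pos_accuracy(gold, pred):
    """Fraction of tokens whose predicted tag matches the gold tag."""
    if len(gold) != len(pred):
        raise ValueError("gold and predicted sequences must be aligned")
    correct = sum(g == p for g, p in zip(gold, pred))
    return correct / len(gold)

# Toy example: 4 of 5 tokens tagged correctly -> 0.8
gold = ["DET", "NOUN", "VERB", "ADJ", "PUNCT"]
pred = ["DET", "NOUN", "VERB", "ADV", "PUNCT"]
print(pos_accuracy(gold, pred))
```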

POS Tags

  • ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB
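The 14-tag inventory above can be written as an id-to-label mapping for decoding classifier outputs. A minimal sketch; note the label order here is an assumption (alphabetical, as listed above), and the model's own `config.json` `id2label` mapping is authoritative:

```python
# UPOS tag inventory from this card, in the (assumed) order listed above.
UPOS_TAGS = ["ADJ", "ADP", "ADV", "CCONJ", "DET", "INTJ", "NOUN",
             "NUM", "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "VERB"]

id2label = dict(enumerate(UPOS_TAGS))
label2id = {tag: i for i, tag in enumerate(UPOS_TAGS)}

def decode(ids):
    """Map a sequence of class indices back to UPOS tag strings."""
    return [id2label[i] for i in ids]

print(decode([4, 6, 13]))  # ['DET', 'NOUN', 'VERB']
```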
