
Model Specification

  • Model: XLM-RoBERTa (base-sized model)
  • Training Data:
    • Combined Afrikaans, Hebrew, Bulgarian, Vietnamese, Norwegian, Urdu, and Czech corpora (top 7 languages)
  • Training Details:
    • Default base-model configuration, with the learning rate adjusted to 4.5e-5
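
The fine-tuned checkpoint can be used directly as a token-classification (POS tagging) model. Below is a minimal usage sketch with the Hugging Face Transformers pipeline; the repository id iceman2434/xlm-roberta-base-ft-udpos213-top7lang is taken from this card, while the example sentence is an arbitrary Tagalog input chosen for illustration.

```python
from transformers import pipeline

# Load the fine-tuned POS tagger from the Hub
# (repository id as listed on this card).
tagger = pipeline(
    "token-classification",
    model="iceman2434/xlm-roberta-base-ft-udpos213-top7lang",
    aggregation_strategy="simple",  # merge word pieces back into whole words
)

# Tag an arbitrary Tagalog sentence (illustrative input, not from the evaluation set).
for token in tagger("Kumain ako ng mansanas."):
    print(token["word"], token["entity_group"], round(token["score"], 3))
```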

Evaluation

  • Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (test set)
  • Evaluated in a zero-shot cross-lingual setting, since Tagalog is not among the seven training languages, achieving 75.18% accuracy (a reproduction sketch follows below)
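
The sketch below illustrates how a zero-shot accuracy figure of this kind can be computed. It assumes the Ugnayan treebank is available as the tl_ugnayan configuration of the universal_dependencies dataset on the Hub and that the model's id2label mapping holds UPOS tag strings; both are assumptions, not details stated on this card.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

MODEL_ID = "iceman2434/xlm-roberta-base-ft-udpos213-top7lang"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(MODEL_ID)
model.eval()

# Assumption: the Ugnayan treebank is exposed as the "tl_ugnayan" config of the
# script-based universal_dependencies dataset (hence trust_remote_code).
ds = load_dataset("universal_dependencies", "tl_ugnayan", split="test",
                  trust_remote_code=True)
upos_names = ds.features["upos"].feature.names  # integer id -> UPOS tag string

correct, total = 0, 0
for example in ds:
    words, gold = example["tokens"], example["upos"]
    enc = tokenizer(words, is_split_into_words=True, truncation=True,
                    return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]
    pred_ids = logits.argmax(dim=-1).tolist()

    # Score the prediction of the first sub-token of each word.
    seen = set()
    for idx, word_id in enumerate(enc.word_ids()):
        if word_id is None or word_id in seen:
            continue
        seen.add(word_id)
        # Assumption: id2label contains UPOS strings such as "NOUN".
        predicted_tag = model.config.id2label[pred_ids[idx]]
        if predicted_tag == upos_names[gold[word_id]]:
            correct += 1
        total += 1

print(f"Zero-shot UPOS accuracy on Tagalog Ugnayan: {correct / total:.2%}")
```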

POS Tags

  • ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB