Model Specification
- Model: XLM-RoBERTa (base-sized model)
- Training Data:
  - Combined Afrikaans, Hebrew, Bulgarian, and Vietnamese corpora (Top 4 Languages)
- Training Details:
  - Base configuration with a minor adjustment to the learning rate (4.5e-5); a fine-tuning sketch follows this list
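As a rough illustration of that setup, here is a minimal fine-tuning sketch using the Hugging Face Trainer. Only the base checkpoint (xlm-roberta-base), the label set, and the learning rate (4.5e-5) come from this card; the Trainer choice and output directory are assumptions, and dataset preparation is elided.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Label set taken from the POS Tags section of this card.
LABELS = ["ADJ", "ADP", "ADV", "CCONJ", "DET", "INTJ", "NOUN",
          "NUM", "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "VERB"]

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(LABELS),
    id2label=dict(enumerate(LABELS)),
    label2id={label: i for i, label in enumerate(LABELS)},
)

# Base configuration except for the adjusted learning rate noted above.
args = TrainingArguments(output_dir="xlmr-base-pos", learning_rate=4.5e-5)

# `train_dataset` (not shown) would hold the combined Afrikaans, Hebrew,
# Bulgarian, and Vietnamese corpora, tokenized with word-to-subword
# label alignment.
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset,
#                   tokenizer=tokenizer)
# trainer.train()
```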
Evaluation
- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (test set)
- Evaluated in a zero-shot cross-lingual setting on this test set, reaching 80.01% accuracy; an evaluation sketch follows this list
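A zero-shot evaluation along these lines could be sketched as follows. This assumes a datasets version that still supports the script-based universal_dependencies loader, that the Ugnayan config is named tl_ugnayan, that upos is stored as a ClassLabel sequence, and a hypothetical local checkpoint path; accuracy here is plain word-level accuracy.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Config name "tl_ugnayan" is an assumption; Ugnayan ships only a test split.
ud = load_dataset("universal_dependencies", "tl_ugnayan",
                  split="test", trust_remote_code=True)
# Assumes upos is a ClassLabel sequence; map class indices back to tag names.
upos_names = ud.features["upos"].feature.names

# Hypothetical path to the fine-tuned checkpoint described in this card.
model = AutoModelForTokenClassification.from_pretrained("./xlmr-base-pos")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model.eval()

correct = total = 0
for example in ud:
    words = example["tokens"]
    gold = [upos_names[i] for i in example["upos"]]
    enc = tokenizer(words, is_split_into_words=True,
                    return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred_ids = model(**enc).logits.argmax(-1)[0].tolist()
    # Score one prediction per word: the first subword of each word.
    seen = set()
    for idx, word_id in enumerate(enc.word_ids()):
        if word_id is None or word_id in seen:
            continue
        seen.add(word_id)
        correct += int(model.config.id2label[pred_ids[idx]] == gold[word_id])
        total += 1

print(f"word-level accuracy: {correct / total:.4f}")
```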
POS Tags
- ADJ · ADP · ADV · CCONJ · DET · INTJ · NOUN · NUM · PART · PRON · PROPN · PUNCT · SCONJ · VERB
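For a quick usage illustration with this tag set, a token-classification pipeline call might look like the following; the repository id is a placeholder, since this card does not state one.

```python
from transformers import pipeline

# "your-username/xlmr-base-pos-tagalog" is a placeholder repo id.
tagger = pipeline("token-classification",
                  model="your-username/xlmr-base-pos-tagalog",
                  aggregation_strategy="simple")

# Tagalog example sentence; each word is printed with its predicted UPOS tag.
for entry in tagger("Kumain ang bata ng mansanas."):
    print(entry["word"], entry["entity_group"])
```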