# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: English
This model is presented in our paper:

- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages

See the accompanying Space for more details.
## Usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-en")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-en")
## Evaluation results

Self-reported test accuracy on Universal Dependencies v2.8:

| Language  | Test accuracy (%) |
|-----------|-------------------|
| English   | 96.0 |
| Dutch     | 90.4 |
| German    | 88.6 |
| Italian   | 87.8 |
| French    | 87.4 |
| Spanish   | 90.3 |
| Russian   | 91.0 |
| Swedish   | 94.0 |
| Norwegian | 89.6 |
| Danish    | 91.6 |