# xlm-roberta-base-finetuned-ud-arabic
This model is a fine-tuned version of xlm-roberta-base on the Universal Dependencies Arabic (PADT) treebank. It achieves the following results on the evaluation set:
- Loss: 0.0917
- F1: 0.9700
- Accuracy: 0.9794
## Model description
More information needed
## Intended uses & limitations

Simple usage:

```python
from transformers import pipeline

pos_tagger = pipeline(
    "token-classification",
    "mohammedaly2222002/xlm-roberta-base-finetuned-ud-arabic",
)

# "Khalid bought a car, and he now has 3 cars"
text = "اشترى خالد سيارة، و أصبح عنده 3 سيارات"
pos_tagger(text)
```
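Each prediction returned by the pipeline is a dict whose keys include `"word"` and `"entity"` (per the transformers token-classification pipeline output format). A minimal sketch of collapsing that output into (surface form, tag) pairs; the sample predictions below are invented for illustration, not real model output:

```python
def to_tagged_pairs(predictions):
    """Collapse token-classification pipeline output into (word, tag) pairs."""
    return [(p["word"], p["entity"]) for p in predictions]

# Hypothetical pipeline output for the first two words of the example sentence.
sample_predictions = [
    {"word": "اشترى", "entity": "VERB", "score": 0.99},
    {"word": "خالد", "entity": "PROPN", "score": 0.98},
]
print(to_tagged_pairs(sample_predictions))
```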
## Training and evaluation data
Dataset Link: https://github.com/UniversalDependencies/UD_Arabic-PADT
The treebank consists of 7,664 sentences (282,384 tokens) and its domain is mainly newswire. The annotation is licensed under the terms of CC BY-NC-SA 3.0 and its original (non-UD) version can be downloaded from http://hdl.handle.net/11858/00-097C-0000-0001-4872-3.
The morphological and syntactic annotation of the Arabic UD treebank is created through conversion of PADT data. The conversion procedure has been designed by Dan Zeman. The main coordinator of the original PADT project was Otakar Smrž.
Column | Annotation status |
---|---|
ID | Sentence-level units in PADT often correspond to entire paragraphs and they were obtained automatically. Low-level tokenization (whitespace and punctuation) was done automatically and then hand-corrected. Splitting of fused tokens into syntactic words in Arabic is part of morphological analysis. ElixirFM was used to provide context-independent options, then these results were disambiguated manually. |
FORM | The unvocalized surface form is used. Fully vocalized counterpart can be found in the MISC column as Vform attribute. |
LEMMA | Plausible analyses provided by ElixirFM, manual disambiguation. Lemmas are vocalized. Part of the selection of lemmas was also word sense disambiguation of the lexemes, providing English equivalents (see the Gloss attribute of the MISC column). |
UPOSTAG | Converted automatically from XPOSTAG (via Interset); human checking of patterns revealed by automatic consistency tests. |
XPOSTAG | Manual selection from possibilities provided by ElixirFM. |
FEATS | Converted automatically from XPOSTAG (via Interset); human checking of patterns revealed by automatic consistency tests. |
HEAD | Original PADT annotation is manual. Automatic conversion to UD; human checking of patterns revealed by automatic consistency tests. |
DEPREL | Original PADT annotation is manual. Automatic conversion to UD; human checking of patterns revealed by automatic consistency tests. |
DEPS | — (currently unused) |
MISC | Information about token spacing taken from PADT annotation. Additional word attributes provided by morphological analysis (i.e. ElixirFM rules + manual disambiguation): Vform (fully vocalized Arabic form), Translit (Latin transliteration of word form), LTranslit (Latin transliteration of lemma), Root (word root), Gloss (English translation of lemma). |
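The rows above correspond to the standard 10-column, tab-separated CoNLL-U token layout. A minimal sketch of splitting one token line into those columns; the example line is hypothetical, not copied from the treebank:

```python
# The 10 CoNLL-U columns, in order, as described in the table above.
CONLLU_COLUMNS = [
    "ID", "FORM", "LEMMA", "UPOS", "XPOS",
    "FEATS", "HEAD", "DEPREL", "DEPS", "MISC",
]

def parse_token_line(line):
    """Map one tab-separated CoNLL-U token line to a column -> value dict."""
    values = line.rstrip("\n").split("\t")
    if len(values) != len(CONLLU_COLUMNS):
        raise ValueError(f"expected {len(CONLLU_COLUMNS)} columns, got {len(values)}")
    return dict(zip(CONLLU_COLUMNS, values))

# Hypothetical token line (fields invented for illustration).
sample = "1\tكتاب\tكِتاب\tNOUN\tN------S1I\tCase=Nom\t0\troot\t_\tGloss=book"
token = parse_token_line(sample)
print(token["UPOS"], token["DEPREL"])
```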
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
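The list above maps naturally onto the argument names of transformers' `TrainingArguments`; the mapping below is a sketch under that assumption, not the author's actual training script:

```python
# Hyperparameters from the list above, keyed by the names TrainingArguments
# would expect. The Adam betas=(0.9, 0.999) and epsilon=1e-8 listed above are
# the Transformers defaults, so they need no explicit arguments.
training_kwargs = {
    "learning_rate": 5e-5,
    "per_device_train_batch_size": 2,
    "per_device_eval_batch_size": 2,
    "seed": 42,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 3,
}
print(training_kwargs["learning_rate"])
```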
### Training results
Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
---|---|---|---|---|---|
0.1887 | 1.0 | 3038 | 0.1140 | 0.9588 | 0.9715 |
0.0900 | 2.0 | 6076 | 0.0907 | 0.9665 | 0.9768 |
0.0558 | 3.0 | 9114 | 0.0917 | 0.9700 | 0.9794 |
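The step counts in the table are internally consistent with the batch size above, assuming the commonly cited PADT train split of 6,075 sentences (an assumption, since the card does not state the split sizes):

```python
import math

# With train_batch_size = 2, one epoch over an assumed 6,075-sentence train
# split takes ceil(6075 / 2) = 3038 steps, and 3 epochs give 9114 steps,
# matching the table rows above.
train_batch_size = 2
train_sentences = 6075  # assumed PADT train split size
steps_per_epoch = math.ceil(train_sentences / train_batch_size)
print(steps_per_epoch)      # 3038
print(3 * steps_per_epoch)  # 9114
```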
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.15.0