---
language:
- ar
license: apache-2.0
widget:
- text: 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
---
# CAMeLBERT-CA POS-MSA Model
## Model description
**CAMeLBERT-CA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).

## Intended uses
You can use the CAMeLBERT-CA POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.

#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa')
>>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999758, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.9997559, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.99996257, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.9958452, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.9999635, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.99991685, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99997497, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.9999795, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99924207, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.99994195, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.9997414, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}]
```
*Note*: to download our models, you need `transformers>=3.5.0`.
Otherwise, you can download the models manually.
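If you do download the model files manually (for example, by cloning the model repository), you can load them from a local directory instead of the model hub. The snippet below is a minimal sketch; the local path is illustrative and should point at wherever you saved the files:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Illustrative local path; replace it with the directory that holds the
# manually downloaded model files (config, vocabulary, weights).
local_path = './bert-base-arabic-camelbert-ca-pos-msa'

tokenizer = AutoTokenizer.from_pretrained(local_path)
model = AutoModelForTokenClassification.from_pretrained(local_path)

# Build the same token-classification pipeline from the locally loaded model.
pos = pipeline('token-classification', model=model, tokenizer=tokenizer)
print(pos('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'))
```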

## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
    title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
    author = "Inoue, Go and
      Alhafni, Bashar and
      Baimukan, Nurpeiis and
      Bouamor, Houda and
      Habash, Nizar",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Online)",
    publisher = "Association for Computational Linguistics",
    abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```