Go Inoue committed on
Commit
5d67666
1 Parent(s): d9e6f20
README.md ADDED
---
language:
- ar
license: apache-2.0
widget:
- text: 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
---
# CAMeLBERT-MSA POS-MSA Model
## Model description
**CAMeLBERT-MSA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).

## Intended uses
You can use the CAMeLBERT-MSA POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.

#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa')
>>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999764, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.99991846, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.9998356, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.99368894, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.9999426, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.9999339, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99996775, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.99996895, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99990183, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.9999347, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.99931145, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}]
```
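If you prefer to skip the pipeline abstraction, the same predictions can be obtained by loading the tokenizer and model directly. The sketch below is ours, not from the original card; it uses the standard transformers Auto classes and does not reproduce the pipeline's sub-word grouping or start/end offsets.
```python
# Minimal sketch (ours, not from the model card): explicit token classification
# with the standard transformers Auto classes.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = 'CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Map each sub-word piece to its highest-scoring tag via config.id2label.
predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'][0].tolist())
for token, label_id in zip(tokens, predictions):
    if token not in tokenizer.all_special_tokens:
        print(token, model.config.id2label[label_id])
```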
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
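The card stops short of describing the manual route. One way to fetch all of the files in this repository programmatically (our suggestion, using the separate `huggingface_hub` package, which the original card does not mention) is:
```python
# Sketch (ours): download every file in the repo into the local cache.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa')
print(local_dir)  # path containing config.json, pytorch_model.bin, vocab.txt, ...
```
Cloning the repository with `git` and `git-lfs` works as well, since the large binaries below are stored as LFS objects.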
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
    title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
    author = "Inoue, Go and
      Alhafni, Bashar and
      Baimukan, Nurpeiis and
      Bouamor, Houda and
      Habash, Nizar",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Online)",
    publisher = "Association for Computational Linguistics",
    abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
config.json ADDED
{
  "_name_or_path": "/Users/gi372/Research/bert-base-arabic-camelbert-msa-pos-msa",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "abbrev",
    "1": "adj",
    "2": "adj_comp",
    "3": "adj_num",
    "4": "adv",
    "5": "adv_interrog",
    "6": "adv_rel",
    "7": "conj",
    "8": "conj_sub",
    "9": "digit",
    "10": "interj",
    "11": "noun",
    "12": "noun_num",
    "13": "noun_prop",
    "14": "noun_quant",
    "15": "part",
    "16": "part_det",
    "17": "part_focus",
    "18": "part_fut",
    "19": "part_interrog",
    "20": "part_neg",
    "21": "part_restrict",
    "22": "part_verb",
    "23": "part_voc",
    "24": "prep",
    "25": "pron",
    "26": "pron_dem",
    "27": "pron_interrog",
    "28": "pron_rel",
    "29": "punc",
    "30": "verb",
    "31": "verb_pseudo"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "abbrev": 0,
    "adj": 1,
    "adj_comp": 2,
    "adj_num": 3,
    "adv": 4,
    "adv_interrog": 5,
    "adv_rel": 6,
    "conj": 7,
    "conj_sub": 8,
    "digit": 9,
    "interj": 10,
    "noun": 11,
    "noun_num": 12,
    "noun_prop": 13,
    "noun_quant": 14,
    "part": 15,
    "part_det": 16,
    "part_focus": 17,
    "part_fut": 18,
    "part_interrog": 19,
    "part_neg": 20,
    "part_restrict": 21,
    "part_verb": 22,
    "part_voc": 23,
    "prep": 24,
    "pron": 25,
    "pron_dem": 26,
    "pron_interrog": 27,
    "pron_rel": 28,
    "punc": 29,
    "verb": 30,
    "verb_pseudo": 31
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "transformers_version": "4.11.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30000
}
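The `id2label` / `label2id` maps above define the 32-tag POS inventory this model predicts. As a quick sanity check (our sketch, not part of the commit), the inventory can be inspected through transformers' `AutoConfig` without downloading the model weights:
```python
# Sketch (ours): inspect the label inventory defined in config.json.
from transformers import AutoConfig

config = AutoConfig.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa')
print(config.num_labels)    # 32
print(config.id2label[13])  # 'noun_prop'
```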
pytorch_model.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:996b457c94c7e73fe995308fe6bba3219e74698264b16367892e4601b9dd2cf7
size 436478373
special_tokens_map.json ADDED
{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tf_model.h5 ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:449a8fadf02a53574888a8d307683197ec10d60d1b06f55deacb376f1cf95148
size 436592640
tokenizer_config.json ADDED
{"do_lower_case": false, "special_tokens_map_file": null, "full_tokenizer_file": null}
training_args.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:edd886f9e2d32fcf63682260e228c087832e1eecafded4916c8915a701d5f257
size 1355
vocab.txt ADDED
The diff for this file is too large to render.