Go Inoue committed
Commit 98db8c7
1 Parent(s): 47ddc80

Add model files

README.md ADDED
@@ -0,0 +1,44 @@
+ ---
+ language:
+ - ar
+ license: apache-2.0
+ widget:
+ - text: 'شلونك ؟ شخبارك ؟'
+ ---
+ # CAMeLBERT-Mix POS-GLF Model
+ ## Model description
+ **CAMeLBERT-Mix POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
+ For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset.
+ Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
+
+ ## Intended uses
+ You can use the CAMeLBERT-Mix POS-GLF model as part of the transformers pipeline.
+ This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
+
+ #### How to use
+ To use the model with a transformers pipeline:
+ ```python
+ >>> from transformers import pipeline
+ >>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf')
+ >>> text = 'شلونك ؟ شخبارك ؟'
+ >>> pos(text)
+ [{'entity': 'pron_interrog', 'score': 0.82657206, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'prep', 'score': 0.9771731, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999568, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.9977217, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.99993783, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'prep', 'score': 0.5309442, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999575, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}]
+ ```
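The output above is per subword piece (note the `##` continuation markers on `##ك` and `##خبار`). If you want word-level tags instead, newer `transformers` releases accept an aggregation strategy; the following is a minimal sketch of that option, not part of the original card:

```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification',
...                model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf',
...                aggregation_strategy='simple')  # merges subword pieces into word spans
>>> pos('شلونك ؟ شخبارك ؟')
```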
+ *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
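For the manual route, one option is a sketch like the following, assuming the `huggingface_hub` package (not mentioned in the original card): download the repository once, then point the pipeline at the local copy.

```python
>>> from huggingface_hub import snapshot_download
>>> from transformers import pipeline
>>> local_dir = snapshot_download('CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf')  # caches all repo files locally
>>> pos = pipeline('token-classification', model=local_dir)
```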
+ ## Citation
+ ```bibtex
+ @inproceedings{inoue-etal-2021-interplay,
+     title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
+     author = "Inoue, Go and
+       Alhafni, Bashar and
+       Baimukan, Nurpeiis and
+       Bouamor, Houda and
+       Habash, Nizar",
+     booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
+     month = apr,
+     year = "2021",
+     address = "Kyiv, Ukraine (Online)",
+     publisher = "Association for Computational Linguistics",
+     abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
+ }
+ ```
config.json ADDED
@@ -0,0 +1,99 @@
+ {
+   "_name_or_path": "/Users/gi372/Research/bert-base-arabic-camelbert-mix-pos-glf",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "abbrev",
+     "1": "adj",
+     "2": "adj_comp",
+     "3": "adj_num",
+     "4": "adv",
+     "5": "adv_interrog",
+     "6": "adv_rel",
+     "7": "conj",
+     "8": "conj_sub",
+     "9": "digit",
+     "10": "interj",
+     "11": "latin",
+     "12": "noun",
+     "13": "noun_num",
+     "14": "noun_prop",
+     "15": "noun_quant",
+     "16": "part",
+     "17": "part_det",
+     "18": "part_focus",
+     "19": "part_fut",
+     "20": "part_interrog",
+     "21": "part_neg",
+     "22": "part_restrict",
+     "23": "part_verb",
+     "24": "part_voc",
+     "25": "prep",
+     "26": "pron",
+     "27": "pron_dem",
+     "28": "pron_exclam",
+     "29": "pron_interrog",
+     "30": "pron_rel",
+     "31": "punc",
+     "32": "verb",
+     "33": "verb_nom",
+     "34": "verb_pseudo"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "abbrev": 0,
+     "adj": 1,
+     "adj_comp": 2,
+     "adj_num": 3,
+     "adv": 4,
+     "adv_interrog": 5,
+     "adv_rel": 6,
+     "conj": 7,
+     "conj_sub": 8,
+     "digit": 9,
+     "interj": 10,
+     "latin": 11,
+     "noun": 12,
+     "noun_num": 13,
+     "noun_prop": 14,
+     "noun_quant": 15,
+     "part": 16,
+     "part_det": 17,
+     "part_focus": 18,
+     "part_fut": 19,
+     "part_interrog": 20,
+     "part_neg": 21,
+     "part_restrict": 22,
+     "part_verb": 23,
+     "part_voc": 24,
+     "prep": 25,
+     "pron": 26,
+     "pron_dem": 27,
+     "pron_exclam": 28,
+     "pron_interrog": 29,
+     "pron_rel": 30,
+     "punc": 31,
+     "verb": 32,
+     "verb_nom": 33,
+     "verb_pseudo": 34
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.11.3",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30000
+ }
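The `id2label`/`label2id` maps above define the model's 35-tag Gulf Arabic POS inventory. As a minimal sketch (standard `transformers` API, not part of this commit), the mapping can be inspected without loading the weights:

```python
>>> from transformers import AutoConfig
>>> config = AutoConfig.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf')
>>> len(config.id2label)   # 35 POS tags, ids 0-34
35
>>> config.id2label[32]    # keys are converted to int on load
'verb'
```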
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0b4af01e1102d1432f4352bd483da7d1718d7a6aaa3d1a64f0dbedf705418be
+ size 436487601
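This file is a Git LFS pointer: the repository stores only the SHA-256 `oid` and `size`, while the ~436 MB checkpoint lives in LFS storage. A minimal stdlib-only sketch for checking a downloaded copy against the pointer (the local path is hypothetical):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Hash the file in 1 MiB chunks so the 436 MB checkpoint never sits in memory at once.
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Compare against the oid recorded in the LFS pointer above.
assert sha256_of('pytorch_model.bin') == 'f0b4af01e1102d1432f4352bd483da7d1718d7a6aaa3d1a64f0dbedf705418be'
```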
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b430e102b2205e1bca1dd72f709f32027808b01734a85620575e22c08760ce34
+ size 436592640
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "special_tokens_map_file": null, "full_tokenizer_file": null}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d34d1811193cab3dea584df888f2ac8ef373797dba5467afbd0391859c060ad
+ size 1355
vocab.txt ADDED
The diff for this file is too large to render.