Bmalmotairy committed on
Commit 2c97900 · 1 Parent(s): d736624

End of training

README.md ADDED
@@ -0,0 +1,70 @@
+ ---
+ base_model: UBC-NLP/MARBERT
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - precision
+ - recall
+ - f1
+ model-index:
+ - name: marbert-fully-supervised-arabic-propaganda
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # marbert-fully-supervised-arabic-propaganda
+
+ This model is a fine-tuned version of [UBC-NLP/MARBERT](https://huggingface.co/UBC-NLP/MARBERT) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.0022
+ - Accuracy: 0.9357
+ - Precision: 0.6842
+ - Recall: 0.6341
+ - F1: 0.6582
+
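The commit does not include the metric function itself; a minimal `compute_metrics` sketch that would produce these four numbers with the `Trainer` (assuming binary labels and scikit-learn) could look like this:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # Trainer passes (logits, labels); pick the higher-scoring of the two classes.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # average="binary" yields the single precision/recall/F1 values reported above.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```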
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 3e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 5
+
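Expressed through the `transformers` Trainer API, these hyperparameters correspond roughly to the following (a sketch, not the author's exact script; `output_dir` and the per-epoch evaluation strategy are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="marbert-fully-supervised-arabic-propaganda",  # hypothetical
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumed: the table below reports one eval per epoch
)
```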
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
+ | 0.1182        | 1.0   | 40   | 0.4448          | 0.9405   | 0.7105    | 0.6585 | 0.6835 |
+ | 0.0416        | 2.0   | 80   | 0.5481          | 0.9286   | 0.6122    | 0.7317 | 0.6667 |
+ | 0.0206        | 3.0   | 120  | 0.7990          | 0.9476   | 0.7879    | 0.6341 | 0.7027 |
+ | 0.0023        | 4.0   | 160  | 1.0214          | 0.9381   | 0.7027    | 0.6341 | 0.6667 |
+ | 0.002         | 5.0   | 200  | 1.0022          | 0.9357   | 0.6842    | 0.6341 | 0.6582 |
+
+
+ ### Framework versions
+
+ - Transformers 4.32.1
+ - Pytorch 2.1.0
+ - Datasets 2.12.0
+ - Tokenizers 0.13.3
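With this commit in place, the model can be loaded for inference through the standard `transformers` pipeline (a minimal sketch; the repo id combines the committer's namespace with the model name and is an assumption):

```python
from transformers import pipeline

# Assumed repo id: user namespace + model name from this card.
classifier = pipeline(
    "text-classification",
    model="Bmalmotairy/marbert-fully-supervised-arabic-propaganda",
)

# Returns the top label ("Transparent" or "Propaganda") with its score.
print(classifier("نص عربي للتجربة"))
```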
config.json ADDED
@@ -0,0 +1,40 @@
+ {
+   "_name_or_path": "UBC-NLP/MARBERT",
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "directionality": "bidi",
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "Transparent",
+     "1": "Propaganda"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "Propaganda": 1,
+     "Transparent": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.32.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 100000
+ }
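Because the head is a two-class `BertForSequenceClassification`, raw logits map back to readable labels through `id2label`; a sketch of manual inference without the pipeline wrapper (same assumed repo id as above):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Bmalmotairy/marbert-fully-supervised-arabic-propaganda"  # assumed
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("نص للتصنيف", return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 2)
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])  # "Transparent" (0) or "Propaganda" (1)
```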
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:699cc91ccdd4f50ff3ae8525b1419431942c733af07654f6b057e3f45e2e9f1a
+ size 651440366
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 1000000000000000019884624838656,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
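Note that `model_max_length` here is the `transformers` "unset" sentinel rather than a real limit; the usable input length is capped at 512 tokens by `max_position_embeddings` in `config.json`, so truncate explicitly when tokenizing (sketch, same assumed repo id):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Bmalmotairy/marbert-fully-supervised-arabic-propaganda"  # assumed
)

# model_max_length is a huge sentinel, so pass max_length=512 explicitly
# to stay within the position-embedding limit recorded in config.json.
enc = tokenizer("نص طويل ...", truncation=True, max_length=512, return_tensors="pt")
print(enc["input_ids"].shape)
```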
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d13c57708e806bc8475944711b5d83ee17cb4bea1388d4e765d94c80969c6973
+ size 4600
vocab.txt ADDED
The diff for this file is too large to render. See raw diff