---
license: mit
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-base-wnut2017-en
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: wnut_17
      type: wnut_17
      config: wnut_17
      split: validation
      args: wnut_17
    metrics:
    - name: Precision
      type: precision
      value: 0.7219662058371735
    - name: Recall
      type: recall
      value: 0.562200956937799
    - name: F1
      type: f1
      value: 0.6321452589105581
    - name: Accuracy
      type: accuracy
      value: 0.9589398080467807
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-wnut2017-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2319
- Precision: 0.7220
- Recall: 0.5622
- F1: 0.6321
- Accuracy: 0.9589

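As a quick start, here is a minimal inference sketch using the `transformers` token-classification pipeline. The hub ID `Amir13/xlm-roberta-base-wnut2017-en` is an assumption inferred from the model name and committer; substitute the actual repository ID if it differs.

```python
from transformers import pipeline

# Hypothetical hub ID inferred from the model name; adjust if the
# repository lives under a different namespace.
ner = pipeline(
    "token-classification",
    model="Amir13/xlm-roberta-base-wnut2017-en",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Empire State Building is in New York City."))
```
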
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

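While this section is still a stub, the metadata above names the wnut_17 dataset (the WNUT 2017 shared task on novel and emerging entities in user-generated text). A sketch of loading and inspecting it with the standard `datasets` API:

```python
from datasets import load_dataset

# wnut_17 ships train/validation/test splits with `tokens` and `ner_tags` columns.
wnut = load_dataset("wnut_17")
label_names = wnut["train"].features["ner_tags"].feature.names

print(wnut)         # split sizes
print(label_names)  # O, B-person, I-person, B-location, ...
```
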
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the approximate `TrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15

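These settings map onto `transformers.TrainingArguments` roughly as follows. The original training script is not part of this card, so treat this as an approximation rather than the exact setup; `output_dir` and `evaluation_strategy` are assumptions.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above;
# evaluation_strategy="epoch" is assumed because the results table
# below reports one evaluation per epoch.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-wnut2017-en",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    evaluation_strategy="epoch",
)
```
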
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 107  | 0.2789          | 0.4679    | 0.3397 | 0.3936 | 0.9408   |
| No log        | 2.0   | 214  | 0.2092          | 0.6875    | 0.5    | 0.5789 | 0.9518   |
| No log        | 3.0   | 321  | 0.1968          | 0.6194    | 0.5431 | 0.5787 | 0.9567   |
| No log        | 4.0   | 428  | 0.2172          | 0.7212    | 0.5383 | 0.6164 | 0.9586   |
| 0.1523        | 5.0   | 535  | 0.2319          | 0.7220    | 0.5622 | 0.6321 | 0.9589   |
| 0.1523        | 6.0   | 642  | 0.2023          | 0.6180    | 0.5514 | 0.5828 | 0.9577   |
| 0.1523        | 7.0   | 749  | 0.2494          | 0.6895    | 0.5419 | 0.6068 | 0.9589   |
| 0.1523        | 8.0   | 856  | 0.2844          | 0.7018    | 0.5263 | 0.6015 | 0.9578   |
| 0.1523        | 9.0   | 963  | 0.2568          | 0.6808    | 0.5562 | 0.6122 | 0.9592   |
| 0.0294        | 10.0  | 1070 | 0.2453          | 0.6718    | 0.5754 | 0.6198 | 0.9596   |
| 0.0294        | 11.0  | 1177 | 0.2538          | 0.6933    | 0.5706 | 0.6260 | 0.9600   |
| 0.0294        | 12.0  | 1284 | 0.2638          | 0.6865    | 0.5658 | 0.6203 | 0.9593   |
| 0.0294        | 13.0  | 1391 | 0.2744          | 0.6764    | 0.5526 | 0.6083 | 0.9587   |
| 0.0294        | 14.0  | 1498 | 0.2714          | 0.6812    | 0.5622 | 0.6160 | 0.9590   |
| 0.0135        | 15.0  | 1605 | 0.2724          | 0.6830    | 0.5670 | 0.6196 | 0.9593   |


### Framework versions

- Transformers 4.26.1
- PyTorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2