Kudod committed f9dab6b (1 parent: c18daeb): End of training. Files changed (1): README.md (+76, -0).
---
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
model-index:
- name: roberta-large-ner-ghtk-cs-6-labelold-data-3090-12Aug-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# roberta-large-ner-ghtk-cs-6-labelold-data-3090-12Aug-2

This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1987
- Tk: {'precision': 0.9069767441860465, 'recall': 0.6724137931034483, 'f1': 0.7722772277227723, 'number': 116}
- Gày: {'precision': 0.6578947368421053, 'recall': 0.7575757575757576, 'f1': 0.704225352112676, 'number': 33}
- Gày trừu tượng: {'precision': 0.9209401709401709, 'recall': 0.9229122055674518, 'f1': 0.9219251336898395, 'number': 467}
- Ã đơn: {'precision': 0.9128205128205128, 'recall': 0.8944723618090452, 'f1': 0.9035532994923858, 'number': 199}
- Đt: {'precision': 0.9442013129102844, 'recall': 0.9829157175398633, 'f1': 0.9631696428571428, 'number': 878}
- Đt trừu tượng: {'precision': 0.8095238095238095, 'recall': 0.8738317757009346, 'f1': 0.8404494382022472, 'number': 214}
- Overall Precision: 0.9120
- Overall Recall: 0.9240
- Overall F1: 0.9179
- Overall Accuracy: 0.9709
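Each per-entity `f1` above is the harmonic mean of the corresponding precision and recall. A quick sketch checking this for the Đt entity:

```python
# Per-entity F1 is the harmonic mean of precision and recall.
# Precision and recall for the Đt entity, from the evaluation results above:
p = 0.9442013129102844
r = 0.9829157175398633

f1 = 2 * p * r / (p + r)
print(f1)  # ≈ 0.9631696428571428, the reported Đt f1
```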
## Model description

More information needed

## Intended uses & limitations

More information needed
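No usage details are given, but the model name and results indicate six entity labels (Tk, Gày, Gày trừu tượng, Ã đơn, Đt, Đt trừu tượng). As an illustration only, here is a minimal sketch of collapsing per-token tags into entity spans, assuming a BIO tagging scheme (not confirmed by this card; the helper name and example tokens are hypothetical):

```python
# Hypothetical post-processing sketch: group per-token BIO tags into
# (entity_type, text) spans. Assumes BIO tags over the six entity types
# listed in the results above; the tagging scheme is an assumption.

def decode_bio(tokens, tags):
    """Collapse parallel (token, tag) sequences into entity spans."""
    spans, current_type, current_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_type is not None:
                spans.append((current_type, " ".join(current_toks)))
            current_type, current_toks = tag[2:], [tok]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current_toks.append(tok)
        else:  # "O", or an I- tag that does not continue the open span
            if current_type is not None:
                spans.append((current_type, " ".join(current_toks)))
            current_type, current_toks = None, []
    if current_type is not None:
        spans.append((current_type, " ".join(current_toks)))
    return spans

print(decode_bio(["goi", "cho", "0912", "345", "678"],
                 ["O", "O", "B-Đt", "I-Đt", "I-Đt"]))
# → [('Đt', '0912 345 678')]
```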
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
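These settings are consistent with the step counts in the training-results table: at 454 optimizer steps per epoch, 10 epochs finish at step 4540, the table's final row. A small arithmetic check (assuming no gradient accumulation):

```python
# Derive the total step count from the hyperparameters and the
# per-epoch step count visible in the training-results table.
steps_per_epoch = 454   # steps logged at epoch 1.0
num_epochs = 10
train_batch_size = 8    # assumption: no gradient accumulation

total_steps = steps_per_epoch * num_epochs
train_examples = steps_per_epoch * train_batch_size  # approximate epoch size

print(total_steps)     # 4540, the step at epoch 10.0 in the table
print(train_examples)  # ≈ 3632 training examples per epoch
```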
### Training results

| Training Loss | Epoch | Step | Validation Loss | Tk | Gày | Gày trừu tượng | Ã đơn | Đt | Đt trừu tượng | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 454 | 0.1742 | {'precision': 0.375, 'recall': 0.05172413793103448, 'f1': 0.09090909090909091, 'number': 116} | {'precision': 0.5306122448979592, 'recall': 0.7878787878787878, 'f1': 0.6341463414634148, 'number': 33} | {'precision': 0.8404040404040404, 'recall': 0.8907922912205567, 'f1': 0.8648648648648648, 'number': 467} | {'precision': 0.7699530516431925, 'recall': 0.8241206030150754, 'f1': 0.7961165048543689, 'number': 199} | {'precision': 0.8424124513618677, 'recall': 0.9863325740318907, 'f1': 0.9087093389296957, 'number': 878} | {'precision': 0.6772908366533864, 'recall': 0.794392523364486, 'f1': 0.7311827956989246, 'number': 214} | 0.8031 | 0.8642 | 0.8325 | 0.9353 |
| 0.2313 | 2.0 | 908 | 0.1278 | {'precision': 0.8041237113402062, 'recall': 0.6724137931034483, 'f1': 0.7323943661971831, 'number': 116} | {'precision': 0.6756756756756757, 'recall': 0.7575757575757576, 'f1': 0.7142857142857142, 'number': 33} | {'precision': 0.9311926605504587, 'recall': 0.8693790149892934, 'f1': 0.8992248062015504, 'number': 467} | {'precision': 0.7951219512195122, 'recall': 0.8190954773869347, 'f1': 0.806930693069307, 'number': 199} | {'precision': 0.9318423855165069, 'recall': 0.9965831435079726, 'f1': 0.9631260319207484, 'number': 878} | {'precision': 0.8357142857142857, 'recall': 0.5467289719626168, 'f1': 0.6610169491525424, 'number': 214} | 0.8975 | 0.8726 | 0.8849 | 0.9611 |
| 0.0909 | 3.0 | 1362 | 0.1366 | {'precision': 0.8252427184466019, 'recall': 0.7327586206896551, 'f1': 0.776255707762557, 'number': 116} | {'precision': 0.5849056603773585, 'recall': 0.9393939393939394, 'f1': 0.7209302325581395, 'number': 33} | {'precision': 0.8685831622176592, 'recall': 0.9057815845824411, 'f1': 0.8867924528301887, 'number': 467} | {'precision': 0.7647058823529411, 'recall': 0.914572864321608, 'f1': 0.8329519450800914, 'number': 199} | {'precision': 0.9497267759562842, 'recall': 0.989749430523918, 'f1': 0.9693251533742331, 'number': 878} | {'precision': 0.7630331753554502, 'recall': 0.7523364485981309, 'f1': 0.7576470588235295, 'number': 214} | 0.8724 | 0.9182 | 0.8947 | 0.9583 |
| 0.0644 | 4.0 | 1816 | 0.1713 | {'precision': 0.8133333333333334, 'recall': 0.5258620689655172, 'f1': 0.6387434554973822, 'number': 116} | {'precision': 0.7352941176470589, 'recall': 0.7575757575757576, 'f1': 0.746268656716418, 'number': 33} | {'precision': 0.8678861788617886, 'recall': 0.9143468950749465, 'f1': 0.8905109489051095, 'number': 467} | {'precision': 0.9269662921348315, 'recall': 0.8291457286432161, 'f1': 0.8753315649867375, 'number': 199} | {'precision': 0.9242105263157895, 'recall': 1.0, 'f1': 0.9606126914660831, 'number': 878} | {'precision': 0.48931116389548696, 'recall': 0.9626168224299065, 'f1': 0.6488188976377952, 'number': 214} | 0.8195 | 0.9240 | 0.8686 | 0.9586 |
| 0.0486 | 5.0 | 2270 | 0.1590 | {'precision': 0.8494623655913979, 'recall': 0.6810344827586207, 'f1': 0.7559808612440192, 'number': 116} | {'precision': 0.71875, 'recall': 0.696969696969697, 'f1': 0.7076923076923077, 'number': 33} | {'precision': 0.933184855233853, 'recall': 0.8972162740899358, 'f1': 0.9148471615720524, 'number': 467} | {'precision': 0.8578431372549019, 'recall': 0.8793969849246231, 'f1': 0.8684863523573201, 'number': 199} | {'precision': 0.9474260679079957, 'recall': 0.9851936218678815, 'f1': 0.9659408151870463, 'number': 878} | {'precision': 0.6931818181818182, 'recall': 0.8551401869158879, 'f1': 0.7656903765690377, 'number': 214} | 0.8921 | 0.9145 | 0.9032 | 0.9642 |
| 0.0319 | 6.0 | 2724 | 0.1608 | {'precision': 0.8514851485148515, 'recall': 0.7413793103448276, 'f1': 0.792626728110599, 'number': 116} | {'precision': 0.6666666666666666, 'recall': 0.7272727272727273, 'f1': 0.6956521739130435, 'number': 33} | {'precision': 0.9069767441860465, 'recall': 0.9186295503211992, 'f1': 0.9127659574468084, 'number': 467} | {'precision': 0.9297297297297298, 'recall': 0.864321608040201, 'f1': 0.8958333333333334, 'number': 199} | {'precision': 0.9473684210526315, 'recall': 0.9840546697038725, 'f1': 0.9653631284916201, 'number': 878} | {'precision': 0.8936170212765957, 'recall': 0.7850467289719626, 'f1': 0.8358208955223881, 'number': 214} | 0.9198 | 0.9140 | 0.9169 | 0.9696 |
| 0.0214 | 7.0 | 3178 | 0.1753 | {'precision': 0.8181818181818182, 'recall': 0.6206896551724138, 'f1': 0.7058823529411765, 'number': 116} | {'precision': 0.65, 'recall': 0.7878787878787878, 'f1': 0.7123287671232875, 'number': 33} | {'precision': 0.9232456140350878, 'recall': 0.9014989293361885, 'f1': 0.9122426868905742, 'number': 467} | {'precision': 0.895, 'recall': 0.8994974874371859, 'f1': 0.8972431077694235, 'number': 199} | {'precision': 0.9288025889967637, 'recall': 0.9806378132118451, 'f1': 0.954016620498615, 'number': 878} | {'precision': 0.8070175438596491, 'recall': 0.8598130841121495, 'f1': 0.832579185520362, 'number': 214} | 0.8989 | 0.9140 | 0.9064 | 0.9687 |
| 0.0147 | 8.0 | 3632 | 0.1762 | {'precision': 0.8817204301075269, 'recall': 0.7068965517241379, 'f1': 0.7846889952153109, 'number': 116} | {'precision': 0.6578947368421053, 'recall': 0.7575757575757576, 'f1': 0.704225352112676, 'number': 33} | {'precision': 0.9189765458422174, 'recall': 0.9229122055674518, 'f1': 0.920940170940171, 'number': 467} | {'precision': 0.8254716981132075, 'recall': 0.8793969849246231, 'f1': 0.8515815085158152, 'number': 199} | {'precision': 0.9372294372294372, 'recall': 0.9863325740318907, 'f1': 0.9611542730299667, 'number': 878} | {'precision': 0.8181818181818182, 'recall': 0.883177570093458, 'f1': 0.849438202247191, 'number': 214} | 0.8988 | 0.9271 | 0.9128 | 0.9674 |
| 0.0096 | 9.0 | 4086 | 0.1923 | {'precision': 0.9102564102564102, 'recall': 0.6120689655172413, 'f1': 0.7319587628865979, 'number': 116} | {'precision': 0.6756756756756757, 'recall': 0.7575757575757576, 'f1': 0.7142857142857142, 'number': 33} | {'precision': 0.9129511677282378, 'recall': 0.9207708779443254, 'f1': 0.9168443496801706, 'number': 467} | {'precision': 0.9132653061224489, 'recall': 0.8994974874371859, 'f1': 0.9063291139240507, 'number': 199} | {'precision': 0.9370932754880694, 'recall': 0.9840546697038725, 'f1': 0.96, 'number': 878} | {'precision': 0.85, 'recall': 0.8738317757009346, 'f1': 0.8617511520737327, 'number': 214} | 0.9127 | 0.9208 | 0.9167 | 0.9722 |
| 0.0053 | 10.0 | 4540 | 0.1987 | {'precision': 0.9069767441860465, 'recall': 0.6724137931034483, 'f1': 0.7722772277227723, 'number': 116} | {'precision': 0.6578947368421053, 'recall': 0.7575757575757576, 'f1': 0.704225352112676, 'number': 33} | {'precision': 0.9209401709401709, 'recall': 0.9229122055674518, 'f1': 0.9219251336898395, 'number': 467} | {'precision': 0.9128205128205128, 'recall': 0.8944723618090452, 'f1': 0.9035532994923858, 'number': 199} | {'precision': 0.9442013129102844, 'recall': 0.9829157175398633, 'f1': 0.9631696428571428, 'number': 878} | {'precision': 0.8095238095238095, 'recall': 0.8738317757009346, 'f1': 0.8404494382022472, 'number': 214} | 0.9120 | 0.9240 | 0.9179 | 0.9709 |
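The overall scores are micro-averages across the six entity types. A sketch recovering Overall Recall for the final epoch from the per-entity recalls and supports (`number` fields):

```python
# Micro-averaged recall: total correctly recovered entities over total
# gold entities, summed across the six types. (recall, support) pairs
# from the final epoch of the table above:
per_type = [
    (0.6724137931034483, 116),  # Tk
    (0.7575757575757576, 33),   # Gày
    (0.9229122055674518, 467),  # Gày trừu tượng
    (0.8944723618090452, 199),  # Ã đơn
    (0.9829157175398633, 878),  # Đt
    (0.8738317757009346, 214),  # Đt trừu tượng
]
found = sum(r * n for r, n in per_type)  # entities recovered per type
total = sum(n for _, n in per_type)      # 1907 gold entities
print(round(found / total, 4))           # 0.924, matching Overall Recall
```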

### Framework versions

- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
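A sketch of recreating this environment with pip, assuming the standard PyPI package names; the `+cu121` Torch build typically comes from the PyTorch CUDA 12.1 wheel index:

```shell
# Pin the library versions listed above (package names assumed).
pip install "transformers==4.44.0" "datasets==2.19.1" "tokenizers==0.19.1"
# The CUDA 12.1 build of torch is served from PyTorch's own wheel index.
pip install "torch==2.3.1" --extra-index-url https://download.pytorch.org/whl/cu121
```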