raulgdp committed
Commit 1c41ba4
1 Parent(s): a3208a4

update model card README.md

Files changed (1)
  1. README.md +93 -0

README.md ADDED
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2002
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Plant-gob-roberta-base-bne-capitel-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2002
      type: conll2002
      config: es
      split: validation
      args: es
    metrics:
    - name: Precision
      type: precision
      value: 0.8627583108715184
    - name: Recall
      type: recall
      value: 0.8825827205882353
    - name: F1
      type: f1
      value: 0.872557928214448
    - name: Accuracy
      type: accuracy
      value: 0.9784878927600843
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Plant-gob-roberta-base-bne-capitel-ner

This model is a fine-tuned version of [BSC-LT/roberta-base-bne-capitel-ner](https://huggingface.co/BSC-LT/roberta-base-bne-capitel-ner) on the conll2002 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1071
- Precision: 0.8628
- Recall: 0.8826
- F1: 0.8726
- Accuracy: 0.9785
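For a quick check, the model can be loaded with the `transformers` token-classification pipeline. A minimal sketch, assuming the checkpoint is published on the Hub under `raulgdp/Plant-gob-roberta-base-bne-capitel-ner` (a repo ID inferred from this commit, not confirmed in the card):

```python
from transformers import pipeline

# Hypothetical Hub repo ID inferred from this commit; replace with the actual path.
MODEL_ID = "raulgdp/Plant-gob-roberta-base-bne-capitel-ner"

# Token-classification pipeline; "simple" aggregation merges sub-word pieces
# back into whole entity spans.
ner = pipeline("token-classification", model=MODEL_ID, aggregation_strategy="simple")

# The base model and the conll2002 `es` config are Spanish, so a Spanish example:
print(ner("El Ministerio de Agricultura de Colombia presentó el informe en Bogotá."))
# Each prediction is a dict with entity_group, score, word, start, and end.
```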
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
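The metadata above points to the Spanish (`es`) configuration of conll2002, with metrics reported on its validation split. A minimal sketch of loading that data with the `datasets` library (the exact preprocessing used for training is not documented in this card):

```python
from datasets import load_dataset

# Spanish NER portion of CoNLL-2002, as named in the model-index metadata.
dataset = load_dataset("conll2002", "es")

# The validation split is the one the reported metrics refer to.
print(dataset["validation"][0]["tokens"])
print(dataset["validation"][0]["ner_tags"])  # integer-encoded BIO labels

# Mapping from label IDs to BIO tag strings:
label_names = dataset["train"].features["ner_tags"].feature.names
print(label_names)  # e.g. ['O', 'B-PER', 'I-PER', ...]
```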
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
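These values map directly onto `transformers.TrainingArguments`. A sketch of the corresponding configuration; the output directory and evaluation strategy are illustrative assumptions, not recorded in the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Plant-gob-roberta-base-bne-capitel-ner",  # assumed, not in the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,   # epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumed: the results table reports per-epoch metrics
)
```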
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0681        | 1.0   | 521  | 0.0818          | 0.8645    | 0.8842 | 0.8742 | 0.9789   |
| 0.0313        | 2.0   | 1042 | 0.0841          | 0.8516    | 0.8768 | 0.8640 | 0.9779   |
| 0.0179        | 3.0   | 1563 | 0.0979          | 0.8584    | 0.8773 | 0.8677 | 0.9781   |
| 0.0091        | 4.0   | 2084 | 0.0996          | 0.8600    | 0.8819 | 0.8708 | 0.9784   |
| 0.0068        | 5.0   | 2605 | 0.1071          | 0.8628    | 0.8826 | 0.8726 | 0.9785   |
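The precision/recall/F1 columns follow the usual token-classification setup, where entity-level scores are computed with `seqeval`; this is an assumption based on standard `Trainer` fine-tuning scripts, not something stated in the card. A minimal sketch of that metric computation:

```python
import evaluate

# seqeval scores whole entity spans from BIO-tagged sequences, not single tokens.
seqeval = evaluate.load("seqeval")

# Toy inputs; in the training script these would be decoded model predictions
# and gold labels, with special/sub-word tokens filtered out.
predictions = [["B-ORG", "I-ORG", "O", "B-LOC"]]
references  = [["B-ORG", "I-ORG", "O", "B-LOC"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```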

### Framework versions

- Transformers 4.30.2
- PyTorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3