Update README.md
README.md CHANGED
@@ -9,31 +9,6 @@ metrics:
 - recall
 - f1
 - accuracy
-model-index:
-- name: KPOETA/BERTO-LOS-MUCHACHOS-1
-  results:
-  - task:
-      name: Token Classification
-      type: token-classification
-    dataset:
-      name: conll2002
-      type: conll2002
-      config: es
-      split: validation
-      args: es
-    metrics:
-    - name: Precision
-      type: precision
-      value: 0.880600409370025
-    - name: Recall
-      type: recall
-      value: 0.8897058823529411
-    - name: F1
-      type: f1
-      value: 0.8851297291118985
-    - name: Accuracy
-      type: accuracy
-      value: 0.9806463992982264
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -41,14 +16,38 @@ should probably proofread and complete it, then remove this comment. -->
 
 # xml-roberta-large-finetuned-ner
 
-
-[FacebookAI/xlm-roberta-large-finetuned-conll03-english](https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll03-english) on the conll2002 dataset.
+
 The following are the results on the evaluation set:
-
-
-
-
-
+{'eval_loss': 0.0929097980260849,
+ 'eval_precision': 0.8704318936877077,
+ 'eval_recall': 0.8833942118572633,
+ 'eval_f1': 0.8768651513038628,
+ 'eval_accuracy': 0.982701988941157,
+
+ 'eval_LOC': {'precision': 0.8867924528301887,
+  'recall': 0.8238007380073801,
+  'f1': 0.8541367766618843,
+  'number': 1084},
+
+ 'eval_MISC': {'precision': 0.7349726775956285,
+  'recall': 0.7911764705882353,
+  'f1': 0.7620396600566574,
+  'number': 340},
+
+ 'eval_ORG': {'precision': 0.8400272294077604,
+  'recall': 0.8814285714285715,
+  'f1': 0.8602300453119553,
+  'number': 1400},
+
+ 'eval_PER': {'precision': 0.9599465954606141,
+  'recall': 0.9782312925170068,
+  'f1': 0.9690026954177898,
+  'number': 735},
+
+ 'eval_runtime': 3.6357,
+ 'eval_samples_per_second': 417.526,
+ 'eval_steps_per_second': 26.13,
+ 'epoch': 5.0}
 
 ## Model description
 
@@ -65,13 +64,3 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 5
-
-### Training results
-
-| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
-|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| 0.0743        | 1.0   | 2081  | 0.1131          | 0.8385    | 0.8587 | 0.8485 | 0.9771   |
-| 0.049         | 2.0   | 4162  | 0.1429          | 0.8492    | 0.8564 | 0.8528 | 0.9756   |
-| 0.031         | 3.0   | 6243  | 0.1298          | 0.8758    | 0.8817 | 0.8787 | 0.9800   |
-| 0.0185        | 4.0   | 8324  | 0.1279          | 0.8827    | 0.8890 | 0.8859 | 0.9808   |
-| 0.0125        | 5.0   | 10405 | 0.1364          | 0.8806    | 0.8897 | 0.8851 | 0.9806   |
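A quick consistency check, not part of the original card: each F1 reported in the evaluation dict should be the harmonic mean of its precision and recall, since that is how F1 is defined. A minimal sketch with the values copied from above:

```python
# Each reported F1 should equal 2*P*R / (P + R), the harmonic mean of
# precision (P) and recall (R). Values copied from the evaluation dict.
reported = {
    "overall": (0.8704318936877077, 0.8833942118572633, 0.8768651513038628),
    "LOC": (0.8867924528301887, 0.8238007380073801, 0.8541367766618843),
    "MISC": (0.7349726775956285, 0.7911764705882353, 0.7620396600566574),
    "ORG": (0.8400272294077604, 0.8814285714285715, 0.8602300453119553),
    "PER": (0.9599465954606141, 0.9782312925170068, 0.9690026954177898),
}

def harmonic_f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

for name, (p, r, f1) in reported.items():
    assert abs(harmonic_f1(p, r) - f1) < 1e-9, name
```

All five rows pass, so the overall and per-entity numbers are internally consistent.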
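The gap between accuracy (~0.98) and F1 (~0.88) in these results comes from the scoring scheme: accuracy is counted per token, while NER precision/recall/F1 are counted per entity span, as the seqeval library does — an entity is correct only if both its type and its exact boundaries match. A simplified from-scratch sketch of that idea (an assumption for illustration, not the card's actual evaluation code; unlike seqeval's default, it only treats `B-` tags as entity starts):

```python
# Entity-level scoring: extract (type, start, end) spans from BIO tags and
# compare exact spans, so a partially-correct entity counts as wrong.
def extract_entities(tags):
    """Collect (type, start, end) spans from a BIO tag sequence."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last entity
        if tag.startswith("B-") or tag == "O" or (
            tag.startswith("I-") and tag[2:] != etype
        ):
            if etype is not None:
                entities.append((etype, start, i))
            start, etype = (i, tag[2:]) if tag.startswith("B-") else (None, None)
    return entities

def entity_f1(true_tags, pred_tags):
    """Precision, recall, F1 over exact entity spans."""
    true = set(extract_entities(true_tags))
    pred = set(extract_entities(pred_tags))
    tp = len(true & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(true) if true else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

true = ["B-PER", "I-PER", "O", "B-LOC", "O", "B-ORG", "I-ORG"]
pred = ["B-PER", "I-PER", "O", "B-LOC", "O", "B-ORG", "O"]
# 6 of 7 tags are right (high accuracy), but the truncated ORG span makes
# only 2 of 3 entities exact matches, so entity-level P = R = F1 = 2/3.
print(entity_f1(true, pred))
```

This is why a token-level accuracy near 0.98 can coexist with an entity-level F1 near 0.88.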