jorgeortizfuentes committed commit 5e9b544 (parent: 40cbc54)

update model card README.md

Files changed (1): README.md (+17 −21)

README.md CHANGED
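One arithmetic note on the hyperparameter change in the diff below: halving the batch size from 16 to 8 doubles the optimizer steps per epoch in the training tables (114 → 228), which is consistent with an unchanged training set. A minimal check (the implied example count of 1824 is inferred from these numbers, not stated in the card):

```python
# Steps per epoch from the two versions of the training table,
# paired with the batch sizes from the hyperparameter lists.
old_steps, old_batch = 114, 16  # original card
new_steps, new_batch = 228, 8   # updated card

# steps_per_epoch * batch_size approximates the training-set size,
# so both configurations should imply the same number of examples.
print(old_steps * old_batch)  # 1824
print(new_steps * new_batch)  # 1824
```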
@@ -1,10 +1,6 @@
 ---
-language:
-- es
 tags:
 - generated_from_trainer
-datasets:
-- jorgeortizfuentes/spanish_nominal_groups_conll2003
 model-index:
 - name: nominal-groups-recognition-bert-base-spanish-wwm-cased
   results: []
@@ -15,17 +11,17 @@ should probably proofread and complete it, then remove this comment. -->
 
 # nominal-groups-recognition-bert-base-spanish-wwm-cased
 
-This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the jorgeortizfuentes/spanish_nominal_groups_conll2003 dataset.
+This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2988
-- Ng Precision: 0.7309
-- Ng Recall: 0.7780
-- Ng F1: 0.7537
+- Loss: 0.3568
+- Ng Precision: 0.7280
+- Ng Recall: 0.7767
+- Ng F1: 0.7516
 - Ng Number: 3198
-- Overall Precision: 0.7309
-- Overall Recall: 0.7780
-- Overall F1: 0.7537
-- Overall Accuracy: 0.9019
+- Overall Precision: 0.7280
+- Overall Recall: 0.7767
+- Overall F1: 0.7516
+- Overall Accuracy: 0.8992
 
 ## Model description
 
@@ -45,8 +41,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 13
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -56,16 +52,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Ng Precision | Ng Recall | Ng F1 | Ng Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:------------:|:---------:|:------:|:---------:|:-----------------:|:--------------:|:----------:|:----------------:|
-| 0.451 | 1.0 | 114 | 0.2882 | 0.6985 | 0.7483 | 0.7225 | 3198 | 0.6985 | 0.7483 | 0.7225 | 0.8899 |
-| 0.2429 | 2.0 | 228 | 0.2917 | 0.7294 | 0.7483 | 0.7387 | 3198 | 0.7294 | 0.7483 | 0.7387 | 0.8932 |
-| 0.193 | 3.0 | 342 | 0.2864 | 0.7306 | 0.7727 | 0.7511 | 3198 | 0.7306 | 0.7727 | 0.7511 | 0.9000 |
-| 0.1586 | 4.0 | 456 | 0.2988 | 0.7309 | 0.7780 | 0.7537 | 3198 | 0.7309 | 0.7780 | 0.7537 | 0.9019 |
-| 0.1386 | 5.0 | 570 | 0.3116 | 0.7275 | 0.7770 | 0.7514 | 3198 | 0.7275 | 0.7770 | 0.7514 | 0.9003 |
+| 0.3955 | 1.0 | 228 | 0.2778 | 0.7129 | 0.7492 | 0.7306 | 3198 | 0.7129 | 0.7492 | 0.7306 | 0.8924 |
+| 0.2186 | 2.0 | 456 | 0.2763 | 0.7318 | 0.7711 | 0.7509 | 3198 | 0.7318 | 0.7711 | 0.7509 | 0.8990 |
+| 0.1586 | 3.0 | 684 | 0.2960 | 0.7274 | 0.7733 | 0.7496 | 3198 | 0.7274 | 0.7733 | 0.7496 | 0.8992 |
+| 0.119 | 4.0 | 912 | 0.3330 | 0.7283 | 0.7727 | 0.7498 | 3198 | 0.7283 | 0.7727 | 0.7498 | 0.8982 |
+| 0.0943 | 5.0 | 1140 | 0.3568 | 0.7280 | 0.7767 | 0.7516 | 3198 | 0.7280 | 0.7767 | 0.7516 | 0.8992 |
 
 
 ### Framework versions
 
 - Transformers 4.30.2
-- Pytorch 2.0.1+cu117
+- Pytorch 2.0.1+cu118
 - Datasets 2.13.1
 - Tokenizers 0.13.3
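As a consistency check on the updated metrics: the reported Ng F1 is the harmonic mean of the reported Ng precision and recall. A minimal sketch using the final-epoch values from the table above:

```python
# Epoch-5 metrics from the updated training table.
precision = 0.7280
recall = 0.7767

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.7516, matching the reported Ng F1
```

The Ng and Overall columns carry identical values throughout, presumably because nominal groups are the only entity type being tagged, so the per-type and micro-averaged scores coincide.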