Rodrigo1771 committed (verified)
Commit 21aa0d1 · Parent: 1606026

Model save

README.md CHANGED
@@ -2,10 +2,9 @@
 library_name: transformers
 base_model: IVN-RIN/bioBIT
 tags:
-- token-classification
 - generated_from_trainer
 datasets:
-- Rodrigo1771/drugtemist-it-fasttext-8-ner
+- drugtemist-it-fasttext-85-ner
 metrics:
 - precision
 - recall
@@ -18,24 +17,24 @@ model-index:
       name: Token Classification
       type: token-classification
     dataset:
-      name: Rodrigo1771/drugtemist-it-fasttext-8-ner
-      type: Rodrigo1771/drugtemist-it-fasttext-8-ner
+      name: drugtemist-it-fasttext-85-ner
+      type: drugtemist-it-fasttext-85-ner
       config: DrugTEMIST Italian NER
       split: validation
       args: DrugTEMIST Italian NER
     metrics:
     - name: Precision
       type: precision
-      value: 0.9162702188392008
+      value: 0.9211538461538461
     - name: Recall
       type: recall
-      value: 0.9322362052274927
+      value: 0.9273959341723137
     - name: F1
       type: f1
-      value: 0.9241842610364683
+      value: 0.9242643511818619
    - name: Accuracy
       type: accuracy
-      value: 0.9987276032199429
+      value: 0.9986302259153467
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -43,13 +42,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # output
 
-This model is a fine-tuned version of [IVN-RIN/bioBIT](https://huggingface.co/IVN-RIN/bioBIT) on the Rodrigo1771/drugtemist-it-fasttext-8-ner dataset.
+This model is a fine-tuned version of [IVN-RIN/bioBIT](https://huggingface.co/IVN-RIN/bioBIT) on the drugtemist-it-fasttext-85-ner dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0064
-- Precision: 0.9163
-- Recall: 0.9322
-- F1: 0.9242
-- Accuracy: 0.9987
+- Loss: 0.0080
+- Precision: 0.9212
+- Recall: 0.9274
+- F1: 0.9243
+- Accuracy: 0.9986
 
 ## Model description
 
@@ -80,18 +79,18 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| No log        | 1.0   | 470  | 0.0044          | 0.9108    | 0.8993 | 0.9050 | 0.9985   |
-| 0.0122        | 2.0   | 940  | 0.0051          | 0.9050    | 0.8848 | 0.8948 | 0.9984   |
-| 0.0032        | 3.0   | 1410 | 0.0049          | 0.9144    | 0.8993 | 0.9068 | 0.9985   |
-| 0.0017        | 4.0   | 1880 | 0.0060          | 0.9213    | 0.9177 | 0.9195 | 0.9986   |
-| 0.0011        | 5.0   | 2350 | 0.0071          | 0.9280    | 0.8858 | 0.9064 | 0.9985   |
-| 0.0007        | 6.0   | 2820 | 0.0060          | 0.9078    | 0.9245 | 0.9161 | 0.9986   |
-| 0.0005        | 7.0   | 3290 | 0.0059          | 0.9260    | 0.9206 | 0.9233 | 0.9988   |
-| 0.0004        | 8.0   | 3760 | 0.0064          | 0.9163    | 0.9322 | 0.9242 | 0.9987   |
-| 0.0002        | 9.0   | 4230 | 0.0067          | 0.9177    | 0.9284 | 0.9230 | 0.9986   |
-| 0.0001        | 10.0  | 4700 | 0.0069          | 0.9152    | 0.9303 | 0.9227 | 0.9987   |
+| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
+|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+| No log        | 0.9989 | 451  | 0.0051          | 0.9326    | 0.8703 | 0.9004 | 0.9984   |
+| 0.0116        | 2.0    | 903  | 0.0049          | 0.9066    | 0.9206 | 0.9135 | 0.9985   |
+| 0.0034        | 2.9989 | 1354 | 0.0056          | 0.8990    | 0.9216 | 0.9101 | 0.9984   |
+| 0.0018        | 4.0    | 1806 | 0.0066          | 0.9094    | 0.9235 | 0.9164 | 0.9985   |
+| 0.0011        | 4.9989 | 2257 | 0.0056          | 0.9082    | 0.9293 | 0.9187 | 0.9986   |
+| 0.0007        | 6.0    | 2709 | 0.0068          | 0.9145    | 0.9109 | 0.9127 | 0.9985   |
+| 0.0005        | 6.9989 | 3160 | 0.0076          | 0.8880    | 0.9284 | 0.9077 | 0.9984   |
+| 0.0003        | 8.0    | 3612 | 0.0080          | 0.9094    | 0.9235 | 0.9164 | 0.9986   |
+| 0.0002        | 8.9989 | 4063 | 0.0078          | 0.9162    | 0.9206 | 0.9184 | 0.9986   |
+| 0.0001        | 9.9889 | 4510 | 0.0080          | 0.9212    | 0.9274 | 0.9243 | 0.9986   |
 
 
 ### Framework versions
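
As a usage sketch for the fine-tuned checkpoint described in the card above: the snippet below loads it with the Transformers token-classification pipeline. The repository id and the example sentence are assumptions (neither is stated in the diff), so substitute the Hub id this commit was pushed to.

```python
# Minimal usage sketch. The model id below is hypothetical; replace it with
# the actual Hub repository id for this checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Rodrigo1771/output",        # assumed repo id
    aggregation_strategy="simple",     # merge B-/I- word pieces into entity spans
)

# Assumed example: "The patient was administered paracetamol and ibuprofen."
text = "Al paziente è stato somministrato paracetamolo e ibuprofene."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```

`aggregation_strategy="simple"` groups sub-word predictions back into whole drug mentions, which is usually what you want when reporting entity-level precision/recall as in the table above.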
tb/events.out.tfevents.1725901915.0a1c9bec2a53.90165.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d822af7a4fd6d6b414212c3f333d6a722c187a65d96bf923fa31bab34ab59be8
-size 11251
+oid sha256:2d63946caf4f9ca0b3280d0fc5eccabd9aef94a286d7d6ea1cc1928df93ec63c
+size 12077
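
The entry above is a Git LFS pointer to the TensorBoard event file written during this run. A minimal sketch for inspecting its logged scalars, assuming the file has been pulled from LFS and that the Trainer used its usual `eval/...` tag names:

```python
# Sketch: read scalar metrics from the TensorBoard event file referenced above.
# Assumes the file was fetched from Git LFS; tag names such as "eval/f1" follow
# the usual Trainer convention and may need adjusting.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("tb/events.out.tfevents.1725901915.0a1c9bec2a53.90165.0")
acc.Reload()

print(acc.Tags()["scalars"])           # list all logged scalar tags
for event in acc.Scalars("eval/f1"):   # assumed tag name
    print(event.step, event.value)
```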
train.log CHANGED
@@ -1566,3 +1566,16 @@ Training completed. Do not forget to share your model on huggingface.co/models =)
 [INFO|trainer.py:2632] 2024-09-09 17:53:06,832 >> Loading best model from /content/dissertation/scripts/ner/output/checkpoint-4510 (score: 0.9242643511818619).
 
 
 [INFO|trainer.py:4283] 2024-09-09 17:53:07,002 >> Waiting for the current checkpoint push to be finished, this might take a couple of minutes.
+[INFO|trainer.py:3503] 2024-09-09 17:53:10,989 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-09 17:53:10,991 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-09 17:53:12,201 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-09 17:53:12,202 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-09 17:53:12,203 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+[INFO|trainer.py:3503] 2024-09-09 17:53:12,216 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+[INFO|configuration_utils.py:472] 2024-09-09 17:53:12,218 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+[INFO|modeling_utils.py:2799] 2024-09-09 17:53:13,356 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+[INFO|tokenization_utils_base.py:2684] 2024-09-09 17:53:13,357 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+[INFO|tokenization_utils_base.py:2693] 2024-09-09 17:53:13,357 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+{'eval_loss': 0.007998762652277946, 'eval_precision': 0.9211538461538461, 'eval_recall': 0.9273959341723137, 'eval_f1': 0.9242643511818619, 'eval_accuracy': 0.9986302259153467, 'eval_runtime': 17.8292, 'eval_samples_per_second': 381.284, 'eval_steps_per_second': 47.675, 'epoch': 9.99}
+{'train_runtime': 2471.8222, 'train_samples_per_second': 116.817, 'train_steps_per_second': 1.825, 'train_loss': 0.002201940788383577, 'epoch': 9.99}
+