Rodrigo1771 committed

Commit 8e76363
1 Parent(s): 75df356

Model save

Files changed (3):
  1. README.md +51 -13
  2. model.safetensors +1 -1
  3. train.log +10 -0
README.md CHANGED

@@ -2,13 +2,39 @@
  license: apache-2.0
  base_model: PlanTL-GOB-ES/bsc-bio-ehr-es
  tags:
- - token-classification
  - generated_from_trainer
  datasets:
- - Rodrigo1771/multi-train-drugtemist-dev-ner
+ - multi-train-drugtemist-dev-ner
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
  model-index:
  - name: output
-   results: []
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: multi-train-drugtemist-dev-ner
+       type: multi-train-drugtemist-dev-ner
+       config: MultiTrainDrugTEMISTDevNER
+       split: validation
+       args: MultiTrainDrugTEMISTDevNER
+     metrics:
+     - name: Precision
+       type: precision
+       value: 0.09270693512304251
+     - name: Recall
+       type: recall
+       value: 0.9522058823529411
+     - name: F1
+       type: f1
+       value: 0.16896354888689555
+     - name: Accuracy
+       type: accuracy
+       value: 0.7845534874460183
  ---
  
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,17 +42,13 @@ should probably proofread and complete it, then remove this comment. -->
  
  # output
  
- This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the Rodrigo1771/multi-train-drugtemist-dev-ner dataset.
+ This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the multi-train-drugtemist-dev-ner dataset.
  It achieves the following results on the evaluation set:
- - eval_loss: 2.4031
- - eval_precision: 0.0004
- - eval_recall: 0.0386
- - eval_f1: 0.0007
- - eval_accuracy: 0.0028
- - eval_runtime: 16.7962
- - eval_samples_per_second: 405.27
- - eval_steps_per_second: 50.666
- - step: 0
+ - Loss: 1.7861
+ - Precision: 0.0927
+ - Recall: 0.9522
+ - F1: 0.1690
+ - Accuracy: 0.7846
  
  ## Model description
  
@@ -55,6 +77,22 @@ The following hyperparameters were used during training:
  - lr_scheduler_type: linear
  - num_epochs: 10.0
  
+ ### Training results
+ 
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
+ |:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | 0.2596 | 0.9997 | 1701 | 0.7913 | 0.0793 | 0.9366 | 0.1462 | 0.7672 |
+ | 0.1853 | 2.0 | 3403 | 0.6631 | 0.0969 | 0.9485 | 0.1759 | 0.8100 |
+ | 0.1254 | 2.9997 | 5104 | 1.0729 | 0.0906 | 0.9421 | 0.1653 | 0.7755 |
+ | 0.0823 | 4.0 | 6806 | 1.2568 | 0.0888 | 0.9504 | 0.1624 | 0.7719 |
+ | 0.0597 | 4.9997 | 8507 | 1.1908 | 0.0941 | 0.9375 | 0.1710 | 0.7837 |
+ | 0.0446 | 6.0 | 10209 | 1.3844 | 0.0944 | 0.9504 | 0.1718 | 0.7812 |
+ | 0.0325 | 6.9997 | 11910 | 1.5515 | 0.0937 | 0.9476 | 0.1705 | 0.7866 |
+ | 0.022 | 8.0 | 13612 | 1.6300 | 0.0926 | 0.9559 | 0.1689 | 0.7843 |
+ | 0.017 | 8.9997 | 15313 | 1.7459 | 0.0929 | 0.9531 | 0.1693 | 0.7845 |
+ | 0.0135 | 9.9971 | 17010 | 1.7861 | 0.0927 | 0.9522 | 0.1690 | 0.7846 |
+ 
+ 
  ### Framework versions
  
  - Transformers 4.40.2
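
The card above describes a token-classification (NER) checkpoint fine-tuned from bsc-bio-ehr-es. For reference, here is a minimal usage sketch with the `transformers` pipeline; the repo id below is an assumption, since the diff only shows the local output name (`output`):

```python
# Minimal sketch: loading a token-classification checkpoint like the one
# described in this card with the Hugging Face `transformers` pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Rodrigo1771/output",     # assumed repo id, not stated in the diff
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)

# Spanish clinical-style example sentence (the base model is a Spanish biomedical model)
print(ner("El paciente recibió 500 mg de paracetamol cada 8 horas."))
```

`aggregation_strategy="simple"` collapses sub-token predictions into whole entity mentions, which is usually what you want when extracting drug mentions.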
model.safetensors CHANGED

@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0931a4af01357abb6f7dc6ad8c0e58400dcf281e4ea89e471d6ca8c508ad5be3
+ oid sha256:a92e9473c4569e5b9f9c2355692ca54a9ea2c391a97e47dae7d504f0230253df
  size 496262556
train.log CHANGED

@@ -1603,3 +1603,13 @@ Training completed. Do not forget to share your model on huggingface.co/models =
  [INFO|trainer.py:2521] 2024-05-13 14:58:14,696 >> Loading best model from /content/dissertation/scripts/ner/output/checkpoint-3403 (score: 0.17586912065439672).
  
  
  [INFO|trainer.py:4035] 2024-05-13 14:58:14,891 >> Waiting for the current checkpoint push to be finished, this might take a couple of minutes.
+ [INFO|trainer.py:3305] 2024-05-13 14:58:37,779 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+ [INFO|configuration_utils.py:471] 2024-05-13 14:58:37,781 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+ [INFO|modeling_utils.py:2590] 2024-05-13 14:58:39,085 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+ [INFO|tokenization_utils_base.py:2488] 2024-05-13 14:58:39,087 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+ [INFO|tokenization_utils_base.py:2497] 2024-05-13 14:58:39,087 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
+ [INFO|trainer.py:3305] 2024-05-13 14:58:39,136 >> Saving model checkpoint to /content/dissertation/scripts/ner/output
+ [INFO|configuration_utils.py:471] 2024-05-13 14:58:39,137 >> Configuration saved in /content/dissertation/scripts/ner/output/config.json
+ [INFO|modeling_utils.py:2590] 2024-05-13 14:58:40,302 >> Model weights saved in /content/dissertation/scripts/ner/output/model.safetensors
+ [INFO|tokenization_utils_base.py:2488] 2024-05-13 14:58:40,303 >> tokenizer config file saved in /content/dissertation/scripts/ner/output/tokenizer_config.json
+ [INFO|tokenization_utils_base.py:2497] 2024-05-13 14:58:40,303 >> Special tokens file saved in /content/dissertation/scripts/ner/output/special_tokens_map.json
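
The "Loading best model from .../checkpoint-3403 (score: 0.17586912065439672)" line corresponds to the epoch-2 row of the results table above, which has the best validation F1 (0.1759): the run reloads the highest-F1 checkpoint before the final save and push. A minimal `TrainingArguments` sketch consistent with that behaviour follows; apart from the scheduler and epoch count, which appear in the card, every value is an assumption:

```python
# Sketch of TrainingArguments consistent with the log above: evaluate and save
# once per epoch, track F1, and reload the best checkpoint at the end of training.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",             # the log shows /content/dissertation/scripts/ner/output
    lr_scheduler_type="linear",      # from the card
    num_train_epochs=10.0,           # from the card
    evaluation_strategy="epoch",     # assumed: evaluate once per epoch
    save_strategy="epoch",           # assumed: must match evaluation_strategy
    load_best_model_at_end=True,     # produces the "Loading best model" log line
    metric_for_best_model="f1",      # checkpoint-3403 has the best eval F1 (0.1759)
)
```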