---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Yepes_5e-05_0404_ES6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Yepes_5e-05_0404_ES6

This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on a dataset that is not specified in this card.
It achieves the following results on the evaluation set:
- Loss: 0.0941
- Precision: 0.6541
- Recall: 0.5210
- F1: 0.5800
- Accuracy: 0.9813
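
The card does not state the downstream task, but the metric set (entity-level precision/recall/F1 plus token accuracy) and the base checkpoint suggest a token-classification (NER-style) fine-tune. Below is a minimal inference sketch under that assumption; the model id is just this card's name and is a placeholder for the actual local path or Hub repository id:

```python
# Minimal inference sketch, assuming this checkpoint is a token-classification
# (NER-style) fine-tune of PubMedBERT. Replace the model id with the actual
# local path or Hub repo id where this model is stored.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Yepes_5e-05_0404_ES6",   # placeholder: local path or Hub repo id
    aggregation_strategy="simple",  # merge sub-word pieces into whole spans
)

print(ner("The BRCA1 mutation was associated with increased cancer risk."))
```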

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
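
These values map directly onto a `TrainingArguments`/`Trainer` setup. A minimal sketch, assuming a token-classification head and a pre-tokenized dataset; the label count and the dataset itself are not given in this card and are left as placeholders:

```python
# Hedged reconstruction of the training setup from the hyperparameters above.
# The dataset and label count are NOT specified in this card and are placeholders.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

checkpoint = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=3)  # num_labels is an assumption

args = TrainingArguments(
    output_dir="Yepes_5e-05_0404_ES6",
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=2000,               # training_steps: 2000
    evaluation_strategy="steps",
    eval_steps=25,                # matches the 25-step interval in the results table
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
)

train_ds = eval_ds = None  # placeholders: the card does not identify the dataset

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
# trainer.train()  # uncomment once real tokenized datasets are supplied
```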

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5973        | 0.43  | 25   | 0.1959          | 0.0       | 0.0    | 0.0    | 0.9705   |
| 0.1831        | 0.86  | 50   | 0.1374          | 0.0       | 0.0    | 0.0    | 0.9705   |
| 0.1271        | 1.29  | 75   | 0.1182          | 0.2786    | 0.1677 | 0.2093 | 0.9735   |
| 0.13          | 1.72  | 100  | 0.1116          | 0.4057    | 0.2964 | 0.3426 | 0.9772   |
| 0.1008        | 2.16  | 125  | 0.1013          | 0.4491    | 0.2904 | 0.3527 | 0.9781   |
| 0.0807        | 2.59  | 150  | 0.0992          | 0.4214    | 0.3533 | 0.3844 | 0.9775   |
| 0.0893        | 3.02  | 175  | 0.0855          | 0.4937    | 0.3503 | 0.4098 | 0.9789   |
| 0.0656        | 3.45  | 200  | 0.0978          | 0.5509    | 0.3563 | 0.4327 | 0.9803   |
| 0.0723        | 3.88  | 225  | 0.0816          | 0.4925    | 0.3922 | 0.4367 | 0.9798   |
| 0.0683        | 4.31  | 250  | 0.0789          | 0.6389    | 0.4132 | 0.5018 | 0.9815   |
| 0.0518        | 4.74  | 275  | 0.0838          | 0.5639    | 0.3832 | 0.4563 | 0.9797   |
| 0.0534        | 5.17  | 300  | 0.0853          | 0.7129    | 0.4461 | 0.5488 | 0.9817   |
| 0.0489        | 5.6   | 325  | 0.0824          | 0.6239    | 0.4222 | 0.5036 | 0.9814   |
| 0.0442        | 6.03  | 350  | 0.0751          | 0.5789    | 0.4940 | 0.5331 | 0.9799   |
| 0.0353        | 6.47  | 375  | 0.1195          | 0.6812    | 0.4222 | 0.5213 | 0.9803   |
| 0.0401        | 6.9   | 400  | 0.0875          | 0.5339    | 0.5419 | 0.5379 | 0.9767   |
| 0.0341        | 7.33  | 425  | 0.0994          | 0.6693    | 0.5090 | 0.5782 | 0.9815   |
| 0.0266        | 7.76  | 450  | 0.0951          | 0.6693    | 0.5150 | 0.5821 | 0.9815   |
| 0.0234        | 8.19  | 475  | 0.0979          | 0.6824    | 0.4760 | 0.5608 | 0.9817   |
| 0.0224        | 8.62  | 500  | 0.0941          | 0.6541    | 0.5210 | 0.5800 | 0.9813   |
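
The precision, recall, F1, and accuracy columns above follow the pattern produced by a seqeval-based `compute_metrics` hook in the standard token-classification training scripts. A minimal sketch of such a hook, assuming the `evaluate` library and a placeholder label set (neither is confirmed by this card):

```python
# Hedged sketch of a seqeval-based compute_metrics hook; the label set is a
# placeholder, since the card does not list the labels used in training.
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_list = ["O", "B-ENT", "I-ENT"]  # placeholder labels

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop special tokens (label id -100) before scoring.
    true_preds = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```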


### Framework versions

- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2