---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bertin-roberta-base-spanish-finetuned-recores3
  results: []
---


# bertin-roberta-base-spanish-finetuned-recores3

This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on an unspecified dataset; the model name suggests the Spanish RECORES reading-comprehension corpus, but the training data is not documented here.
It achieves the following results on the evaluation set:
- Loss: 6.0975
- Accuracy: 0.3884

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 3000
- num_epochs: 25
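With 524 optimizer steps per epoch (see the results table below), the linear schedule with 3,000 warmup steps ramps the learning rate from 0 up to 2e-5 over roughly the first 5.7 epochs, then decays it linearly to 0 at step 13,100. A minimal sketch of that schedule, written in plain Python rather than via Transformers' `get_linear_schedule_with_warmup` so it needs no dependencies:

```python
# Linear warmup + linear decay, matching lr_scheduler_type="linear".
# All constants below come from the hyperparameter list above.
PEAK_LR = 2e-5
WARMUP_STEPS = 3000
TOTAL_STEPS = 13100  # 524 steps/epoch * 25 epochs

def lr_at(step: int) -> float:
    """Learning rate after `step` optimizer steps."""
    if step < WARMUP_STEPS:
        # Linear ramp from 0 to the peak learning rate.
        return PEAK_LR * step / WARMUP_STEPS
    # Linear decay from the peak down to 0 at TOTAL_STEPS.
    remaining = max(0, TOTAL_STEPS - step)
    return PEAK_LR * remaining / (TOTAL_STEPS - WARMUP_STEPS)

print(lr_at(0))      # 0.0
print(lr_at(3000))   # peak, 2e-05
print(lr_at(13100))  # 0.0
```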

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6095        | 1.0   | 524   | 1.6094          | 0.2342   |
| 1.607         | 2.0   | 1048  | 1.5612          | 0.3058   |
| 1.4059        | 3.0   | 1572  | 1.6292          | 0.3361   |
| 0.7047        | 4.0   | 2096  | 2.5111          | 0.4132   |
| 0.2671        | 5.0   | 2620  | 3.2399          | 0.3499   |
| 0.1065        | 6.0   | 3144  | 5.1217          | 0.3444   |
| 0.0397        | 7.0   | 3668  | 4.3270          | 0.3691   |
| 0.0162        | 8.0   | 4192  | 5.1796          | 0.3719   |
| 0.0096        | 9.0   | 4716  | 5.2161          | 0.3994   |
| 0.0118        | 10.0  | 5240  | 4.9225          | 0.3719   |
| 0.0015        | 11.0  | 5764  | 5.0544          | 0.3829   |
| 0.0091        | 12.0  | 6288  | 5.7731          | 0.3884   |
| 0.0052        | 13.0  | 6812  | 4.1606          | 0.3939   |
| 0.0138        | 14.0  | 7336  | 6.2725          | 0.3857   |
| 0.0027        | 15.0  | 7860  | 6.2274          | 0.3857   |
| 0.0003        | 16.0  | 8384  | 6.0935          | 0.4022   |
| 0.0002        | 17.0  | 8908  | 5.7650          | 0.3994   |
| 0.0           | 18.0  | 9432  | 6.3595          | 0.4215   |
| 0.0           | 19.0  | 9956  | 5.8934          | 0.3747   |
| 0.0001        | 20.0  | 10480 | 6.0571          | 0.3884   |
| 0.0           | 21.0  | 11004 | 6.0718          | 0.3884   |
| 0.0           | 22.0  | 11528 | 6.0844          | 0.3884   |
| 0.0           | 23.0  | 12052 | 6.0930          | 0.3884   |
| 0.0           | 24.0  | 12576 | 6.0966          | 0.3884   |
| 0.0           | 25.0  | 13100 | 6.0975          | 0.3884   |
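Two things can be read off the table above. First, the epoch-1 validation loss (1.6094) is ln 5 to four decimal places, the cross-entropy of uniform guessing over five classes, which is consistent with a five-option multiple-choice task (the number of options is an inference from the loss, not documented). Second, from epoch 5 onward the training loss collapses toward zero while the validation loss climbs past 6, a classic overfitting signature; the best validation accuracy (0.4215) occurs at epoch 18, not at the final checkpoint. A quick check of the chance-level figure:

```python
import math

# Cross-entropy of a uniform distribution over k classes is ln(k).
# The epoch-1 validation loss of 1.6094 matches ln(5) to four decimals,
# suggesting (but not proving) a 5-option multiple-choice task.
chance_loss = math.log(5)
print(f"ln(5) = {chance_loss:.4f}")  # 1.6094

# Chance accuracy over 5 options is 0.2; epoch-1 accuracy was 0.2342,
# i.e. barely above guessing.
chance_accuracy = 1 / 5
print(f"chance accuracy = {chance_accuracy:.2f}")  # 0.20
```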


### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1