1 ---
2 tags:
3 - generated_from_trainer
4 datasets:
5 - klue
6 metrics:
7 - pearsonr
8 - f1
9 model-index:
10 - name: bert-base-finetuned-sts
11 results:
12 - task:
13 name: Text Classification
14 type: text-classification
15 dataset:
16 name: klue
17 type: klue
18 args: sts
19 metrics:
20 - name: Pearsonr
21 type: pearsonr
22 value: 0.8756147003619346
23 - name: F1
24 type: f1
25 value: 0.8416666666666667
26 ---
27
31 # bert-base-finetuned-sts
32
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the `sts` (semantic textual similarity) subset of the KLUE dataset.
34 It achieves the following results on the evaluation set:
35 - Loss: 0.4115
36 - Pearsonr: 0.8756
37 - F1: 0.8417
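
For context, the two metrics can be reproduced from model predictions roughly as follows. This is only a sketch, assuming the common KLUE STS convention of binarizing similarity scores at 3.0 for the F1 computation:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import f1_score

def compute_metrics(predictions: np.ndarray, labels: np.ndarray) -> dict:
    """Pearson r on raw similarity scores, F1 on scores binarized at 3.0 (assumed threshold)."""
    corr, _ = pearsonr(predictions, labels)
    f1 = f1_score(labels >= 3.0, predictions >= 3.0)
    return {"pearsonr": corr, "f1": f1}
```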
38
39 ## Model description
40
This checkpoint adapts [klue/bert-base](https://huggingface.co/klue/bert-base), a BERT encoder pre-trained on Korean text, to sentence-pair similarity scoring on KLUE STS. Given two Korean sentences, the model predicts how semantically similar they are; Pearson correlation is reported against the gold similarity scores and F1 against the binarized labels.
42
43 ## Intended uses & limitations
44
The model is intended for scoring the semantic similarity of Korean sentence pairs, for example for duplicate detection or semantic search over Korean text. It was fine-tuned and evaluated only on KLUE STS, so predictions on other domains, text styles, or languages may be less reliable.
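
As a minimal usage sketch (assuming the checkpoint was saved with a single-output regression head, which is the usual setup for STS fine-tuning; the model path below is a placeholder), it can be loaded with the standard `transformers` classes:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder path / Hub id for this checkpoint.
model_name = "bert-base-finetuned-sts"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

sentence1 = "무엇보다도 호스트분들이 너무 친절하셨습니다."
sentence2 = "호스트분들이 정말 친절하셨어요."

# Encode the sentence pair and predict a similarity score.
inputs = tokenizer(sentence1, sentence2, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()

print(f"similarity score: {score:.3f}")  # roughly on the 0-5 KLUE STS scale
```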
46
47 ## Training and evaluation data
48
The model was fine-tuned on the `sts` configuration of the KLUE dataset, which contains Korean sentence pairs annotated with similarity scores from 0 to 5. The results above are reported on the evaluation set used by the Trainer, presumably the validation split.
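
For reference, the data can be loaded with the `datasets` library; the field names below follow the public `klue` dataset card and may differ between dataset versions:

```python
from datasets import load_dataset

# Load the STS configuration of the KLUE benchmark.
sts = load_dataset("klue", "sts")

print(sts)  # DatasetDict with "train" and "validation" splits

example = sts["train"][0]
print(example["sentence1"])
print(example["sentence2"])
print(example["labels"])  # expected: {"label": ..., "real-label": ..., "binary-label": ...}
```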
50
51 ## Training procedure
52
53 ### Training hyperparameters
54
The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
56 - learning_rate: 5e-05
57 - train_batch_size: 32
58 - eval_batch_size: 128
59 - seed: 42
60 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
61 - lr_scheduler_type: linear
62 - lr_scheduler_warmup_ratio: 0.1
63 - num_epochs: 4
64
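As a rough reconstruction (argument names follow the `transformers` `TrainingArguments` API; the output directory is a placeholder), the configuration above corresponds to:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="bert-base-finetuned-sts",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",
)
```
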
65 ### Training results
66
67 | Training Loss | Epoch | Step | Validation Loss | Pearsonr | F1 |
68 |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
69 | 0.7836 | 1.0 | 365 | 0.5507 | 0.8435 | 0.8121 |
70 | 0.1564 | 2.0 | 730 | 0.4396 | 0.8495 | 0.8136 |
71 | 0.0989 | 3.0 | 1095 | 0.4115 | 0.8756 | 0.8417 |
72 | 0.0682 | 4.0 | 1460 | 0.4466 | 0.8746 | 0.8449 |
73
74
75 ### Framework versions
76
77 - Transformers 4.10.2
78 - Pytorch 1.7.1
79 - Datasets 1.12.1
80 - Tokenizers 0.10.3
81