---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: bert-base-finetuned-ynat
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: klue
      type: klue
      args: ynat
    metrics:
    - name: F1
      type: f1
      value: 0.8669116640755216
---

# bert-base-finetuned-ynat

This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the YNAT (topic classification) subset of the [KLUE](https://huggingface.co/datasets/klue) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3710
- F1: 0.8669
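
As a quick usage sketch with the 🤗 Transformers `pipeline` API (the repository id `bert-base-finetuned-ynat` is assumed; substitute the actual model path, and map the returned `LABEL_*` ids to topic names via `model.config.id2label`):

```python
from transformers import pipeline

# Assumed repository id; replace with the actual path of this checkpoint.
classifier = pipeline("text-classification", model="bert-base-finetuned-ynat")

# A Korean news headline ("Samsung Electronics announces record quarterly results").
print(classifier("삼성전자, 분기 사상 최대 실적 발표"))
# -> [{'label': 'LABEL_3', 'score': 0.97}]  (label id and score are illustrative)
```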

## Model description

The model is [klue/bert-base](https://huggingface.co/klue/bert-base), a BERT encoder pre-trained on Korean text, with a sequence-classification head fine-tuned on YNAT (Yonhap News Agency Topic Classification), the topic classification task of the KLUE benchmark. Given a Korean news headline, it predicts one of seven topic categories (IT/science, economy, society, life & culture, world, sports, politics).

## Intended uses & limitations

The model is intended for topic classification of short Korean news headlines into the seven YNAT categories. It is not trained for other languages or text types, so quality will likely drop on long documents, non-news text, or non-Korean input; the reported F1 reflects only the KLUE YNAT evaluation split.

## Training and evaluation data

Fine-tuning used the `ynat` configuration of the [KLUE](https://huggingface.co/datasets/klue) dataset, where each example is a news headline with a single topic label. With a train batch size of 256 and 179 optimization steps per epoch, the training split holds roughly 45k headlines; the official validation split served as the evaluation set. A data-loading sketch follows.
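
The following is a minimal sketch, assuming the `klue`/`ynat` configuration on the Hugging Face Hub (with its `title` text column and `label` field):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# YNAT: Korean news headlines, each labeled with one of seven topics.
dataset = load_dataset("klue", "ynat")
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")

def tokenize(batch):
    # Headlines are short, so a modest max_length keeps padding overhead low.
    return tokenizer(batch["title"], truncation=True, max_length=128)

encoded = dataset.map(tokenize, batched=True)
print(encoded)  # DatasetDict with 'train' and 'validation' splits
```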

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
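
A rough sketch of how these settings map onto `Trainer`, continuing from the data-loading example above (not the exact training script; the output directory name and the macro-F1 averaging are assumptions):

```python
import numpy as np
from datasets import load_metric
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

f1_metric = load_metric("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # KLUE YNAT is conventionally scored with macro F1 (assumed here).
    return f1_metric.compute(predictions=preds, references=labels, average="macro")

model = AutoModelForSequenceClassification.from_pretrained("klue/bert-base", num_labels=7)

args = TrainingArguments(
    output_dir="bert-base-finetuned-ynat",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],       # from the tokenization sketch above
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```

The Adam betas and epsilon listed above are the `Trainer` defaults, so they need no explicit arguments.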

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 1.0   | 179  | 0.4223          | 0.8549 |
| No log        | 2.0   | 358  | 0.3710          | 0.8669 |
| 0.2576        | 3.0   | 537  | 0.3891          | 0.8631 |
| 0.2576        | 4.0   | 716  | 0.3968          | 0.8612 |
| 0.2576        | 5.0   | 895  | 0.4044          | 0.8617 |

### Framework versions

- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3