k4black committed
Commit 0070083
1 Parent(s): 7d7afb0

update model card README.md

Files changed (1)
  1. README.md +105 -0

README.md ADDED

---
license: mit
tags:
- generated_from_trainer
datasets:
- esnli
metrics:
- f1
- accuracy
model-index:
- name: roberta-base-e-snli-classification-nli-base
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: esnli
      type: esnli
      config: plain_text
      split: validation
      args: plain_text
    metrics:
    - name: F1
      type: f1
      value: 0.9108298866502319
    - name: Accuracy
      type: accuracy
      value: 0.9109937004673847
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# roberta-base-e-snli-classification-nli-base

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the esnli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2611
- F1: 0.9108
- Accuracy: 0.9110

## Model description

A [roberta-base](https://huggingface.co/roberta-base) encoder with a sequence-classification head, fine-tuned on premise–hypothesis pairs from e-SNLI for three-way natural language inference (entailment / neutral / contradiction).

## Intended uses & limitations

The model is intended for classifying the relation between a premise and a hypothesis; a usage sketch follows. It is a plain classifier and does not generate the natural-language explanations that e-SNLI additionally provides.

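A minimal inference sketch, assuming the checkpoint is published as `k4black/roberta-base-e-snli-classification-nli-base` (repo id inferred from the commit author and the model name above, not stated in the card):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repo id: commit author's namespace + the model name from this card.
model_id = "k4black/roberta-base-e-snli-classification-nli-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# NLI fine-tuning encodes the premise and hypothesis as a sentence pair.
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
# Label names depend on the saved config; they may be generic LABEL_0/1/2.
print(model.config.id2label[pred])
```
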
## Training and evaluation data

The model is trained on the [esnli](https://huggingface.co/datasets/esnli) dataset (`plain_text` config); the metrics above are reported on its validation split, as recorded in the card metadata. The data can be loaded as sketched below.

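A short loading sketch with the `datasets` library; the column names in the comment are the published e-SNLI fields:

```python
from datasets import load_dataset

# Config and evaluation split as recorded in the card metadata.
esnli_val = load_dataset("esnli", split="validation")
# Each example carries a premise, a hypothesis, an integer label
# (0 = entailment, 1 = neutral, 2 = contradiction) and human explanations.
print(esnli_val[0])
```
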
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
- mixed_precision_training: Native AMP

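As a hedged reconstruction, these settings map onto `transformers.TrainingArguments` roughly as follows; `output_dir` is an assumption, and the 400-step evaluation cadence is read off the results table below rather than listed above:

```python
from transformers import TrainingArguments

# Sketch of the recorded hyperparameters; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="roberta-base-e-snli-classification-nli-base",
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=3,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="steps",  # evaluation every 400 steps,
    eval_steps=400,               # matching the results table below
)
```
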
### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:|
| 1.0317        | 0.05  | 400   | 0.5734          | 0.7771 | 0.7803   |
| 0.544         | 0.09  | 800   | 0.3994          | 0.8548 | 0.8555   |
| 0.4604        | 0.14  | 1200  | 0.3492          | 0.8681 | 0.8687   |
| 0.4235        | 0.19  | 1600  | 0.3323          | 0.8764 | 0.8777   |
| 0.3934        | 0.23  | 2000  | 0.3225          | 0.8831 | 0.8841   |
| 0.3863        | 0.28  | 2400  | 0.3086          | 0.8875 | 0.8872   |
| 0.3767        | 0.33  | 2800  | 0.2972          | 0.8892 | 0.8898   |
| 0.3726        | 0.37  | 3200  | 0.2910          | 0.8932 | 0.8936   |
| 0.3624        | 0.42  | 3600  | 0.2934          | 0.8934 | 0.8937   |
| 0.361         | 0.47  | 4000  | 0.2831          | 0.8989 | 0.8989   |
| 0.3553        | 0.51  | 4400  | 0.2905          | 0.8985 | 0.8993   |
| 0.3451        | 0.56  | 4800  | 0.2725          | 0.9019 | 0.9024   |
| 0.3475        | 0.61  | 5200  | 0.2712          | 0.9046 | 0.9051   |
| 0.3398        | 0.65  | 5600  | 0.2787          | 0.9024 | 0.9028   |
| 0.3322        | 0.7   | 6000  | 0.2697          | 0.9043 | 0.9046   |
| 0.3288        | 0.75  | 6400  | 0.2722          | 0.9006 | 0.9013   |
| 0.324         | 0.79  | 6800  | 0.2677          | 0.9066 | 0.9066   |
| 0.3335        | 0.84  | 7200  | 0.2629          | 0.9075 | 0.9077   |
| 0.3309        | 0.89  | 7600  | 0.2577          | 0.9058 | 0.9061   |
| 0.3236        | 0.93  | 8000  | 0.2561          | 0.9121 | 0.9121   |
| 0.3183        | 0.98  | 8400  | 0.2556          | 0.9084 | 0.9088   |
| 0.3022        | 1.03  | 8800  | 0.2668          | 0.9056 | 0.9064   |
| 0.2974        | 1.07  | 9200  | 0.2519          | 0.9087 | 0.9092   |
| 0.29          | 1.12  | 9600  | 0.2554          | 0.9103 | 0.9109   |
| 0.2855        | 1.16  | 10000 | 0.2611          | 0.9108 | 0.9110   |

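The F1 and Accuracy columns can be recomputed from model predictions with the `evaluate` package (a minimal sketch with toy data; the F1 averaging mode is an assumption, since the card does not record it):

```python
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

preds = [0, 1, 2, 1]   # toy model predictions
labels = [0, 1, 2, 2]  # toy gold labels

print(accuracy.compute(predictions=preds, references=labels))
# Macro averaging is assumed; the card only names the metric as "F1".
print(f1.compute(predictions=preds, references=labels, average="macro"))
```
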
### Framework versions

- Transformers 4.27.1
- PyTorch 1.12.1+cu113
- Datasets 2.10.1
- Tokenizers 0.13.2