---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-large-cased-finetuned-mrpc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE MRPC
      type: glue
      args: mrpc
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6838235294117647
    - name: F1
      type: f1
      value: 0.8122270742358079
---

# bert-large-cased-finetuned-mrpc

This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6274
- Accuracy: 0.6838
- F1: 0.8122
- Combined Score: 0.7480

## Model description

[bert-large-cased](https://huggingface.co/bert-large-cased) is the 24-layer, ~340M-parameter BERT checkpoint pretrained on cased English text. This model adds a sequence-classification head and fine-tunes it on MRPC (the Microsoft Research Paraphrase Corpus): given a pair of sentences, it predicts whether they are paraphrases of each other.

## Intended uses & limitations

The model is intended for paraphrase detection on English sentence pairs in the style of MRPC. Evaluate it carefully before any downstream use: the reported accuracy (0.6838) equals the majority-class rate of the MRPC validation split (279 of its 408 pairs are labeled paraphrases), the F1 (0.8122) matches that of a classifier that labels every pair "equivalent", and the metrics are identical across all five epochs, which suggests this checkpoint collapsed to predicting the majority class. A usage sketch follows.

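A minimal inference sketch using the `transformers` auto classes; the `model_id` below is hypothetical and should point at wherever this checkpoint is actually stored:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical location of this checkpoint; substitute the real path or hub id.
model_id = "bert-large-cased-finetuned-mrpc"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task: encode both sentences together.
inputs = tokenizer(
    "The company said quarterly profits rose.",
    "Profits increased in the quarter, the company said.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# GLUE MRPC labels: 0 = not_equivalent, 1 = equivalent (paraphrase).
print(logits.softmax(dim=-1))
```
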
## Training and evaluation data

The model was fine-tuned on the train split of GLUE MRPC and evaluated on its validation split. MRPC consists of sentence pairs drawn from online news sources, each labeled for whether the two sentences are semantically equivalent. The step counts in the results table below (917 optimizer steps per epoch at batch size 4) correspond to the 3,668-pair train split; the validation split has 408 pairs. Both splits can be loaded with the `datasets` library, as sketched below.

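For reference, a sketch of loading the data with the `datasets` GLUE loader, the same one the Trainer-based fine-tuning scripts use:

```python
from datasets import load_dataset

# GLUE MRPC: 3,668 train / 408 validation / 1,725 test sentence pairs.
mrpc = load_dataset("glue", "mrpc")

print(mrpc["train"][0])
# {'sentence1': ..., 'sentence2': ..., 'label': 1, 'idx': 0}
```
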
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

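A minimal `TrainingArguments` sketch mirroring the list above, assuming the standard Trainer setup; the Adam betas and epsilon are the library defaults (spelled out for completeness), and `output_dir` is hypothetical:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="bert-large-cased-finetuned-mrpc",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,        # library defaults, shown for completeness
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
)
```
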
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6441 | 1.0 | 917 | 0.6370 | 0.6838 | 0.8122 | 0.7480 |
| 0.6451 | 2.0 | 1834 | 0.6553 | 0.6838 | 0.8122 | 0.7480 |
| 0.6428 | 3.0 | 2751 | 0.6332 | 0.6838 | 0.8122 | 0.7480 |
| 0.6476 | 4.0 | 3668 | 0.6248 | 0.6838 | 0.8122 | 0.7480 |
| 0.6499 | 5.0 | 4585 | 0.6274 | 0.6838 | 0.8122 | 0.7480 |

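The Combined Score column is the unweighted mean of accuracy and F1 ((0.6838 + 0.8122) / 2 = 0.7480), following the convention of the Trainer-based GLUE example scripts. A small sketch of the metric computation, with purely hypothetical prediction and reference arrays:

```python
from datasets import load_metric

# GLUE MRPC metric: returns accuracy and F1 for binary paraphrase labels.
metric = load_metric("glue", "mrpc")

# Hypothetical predictions and gold labels, for illustration only.
predictions = [1, 1, 0, 1]
references = [1, 0, 0, 1]

scores = metric.compute(predictions=predictions, references=references)
# The card's "Combined Score" is the unweighted mean of the two values.
scores["combined_score"] = (scores["accuracy"] + scores["f1"]) / 2.0
print(scores)
```
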
### Framework versions

- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3