---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
base_model: bert-base-cased
model-index:
- name: glue-mrpc
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: GLUE MRPC
      type: glue
      args: mrpc
    metrics:
    - type: accuracy
      value: 0.8553921568627451
      name: Accuracy
    - type: f1
      value: 0.897391304347826
      name: F1
  - task:
      type: natural-language-inference
      name: Natural Language Inference
    dataset:
      name: glue
      type: glue
      config: mrpc
      split: validation
    metrics:
    - type: accuracy
      value: 0.8553921568627451
      name: Accuracy
      verified: true
    - type: precision
      value: 0.8716216216216216
      name: Precision
      verified: true
    - type: recall
      value: 0.9247311827956989
      name: Recall
      verified: true
    - type: auc
      value: 0.90464282737351
      name: AUC
      verified: true
    - type: f1
      value: 0.897391304347826
      name: F1
      verified: true
    - type: loss
      value: 0.6564616560935974
      name: loss
      verified: true
---
# glue-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6566
- Accuracy: 0.8554
- F1: 0.8974
- Combined Score: 0.8764 (the arithmetic mean of accuracy and F1)
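
Accuracy and F1 are the standard GLUE metrics for MRPC. As a quick sketch, they can be recomputed with the `evaluate` library; the predictions and references below are toy placeholders, not output from this model:

```python
# Recompute the MRPC metric pair (accuracy and F1) with the evaluate library.
# The toy predictions/references are placeholders, not actual model output.
import evaluate

metric = evaluate.load("glue", "mrpc")
print(metric.compute(predictions=[1, 0, 1, 1], references=[1, 0, 0, 1]))
# -> {'accuracy': 0.75, 'f1': 0.8}
```
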
## Model description
This is [bert-base-cased](https://huggingface.co/bert-base-cased) with a sequence-classification head, fine-tuned for the MRPC paraphrase task. The model takes a pair of English sentences as input and outputs a binary label: 1 if the sentences are semantically equivalent, 0 otherwise.
## Intended uses & limitations
The model is intended for paraphrase detection on English sentence pairs, for example deduplicating near-identical sentences or checking whether a rewritten sentence preserves the original meaning. MRPC is drawn from online news text, so performance may degrade on other domains, and the model inherits any biases present in bert-base-cased and in the training corpus.
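
A minimal inference sketch follows; the checkpoint path `glue-mrpc` is an assumption, so substitute the actual repository id or local output directory:

```python
# Minimal inference sketch; the checkpoint path "glue-mrpc" is an assumption.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("glue-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("glue-mrpc")

# MRPC is a sentence-pair task: encode both sentences together.
inputs = tokenizer(
    "The company said it expects revenue to grow this year.",
    "Revenue is expected to rise this year, the company said.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
prediction = logits.argmax(dim=-1).item()  # 1 = paraphrase, 0 = not a paraphrase
print(prediction)
```
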
## Training and evaluation data
The model was trained and evaluated on the MRPC (Microsoft Research Paraphrase Corpus) subset of [GLUE](https://huggingface.co/datasets/glue): sentence pairs drawn from online news sources, annotated by humans for semantic equivalence, with 3,668 training pairs and 408 validation pairs.
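
The split sizes can be verified directly with the `datasets` library:

```python
# Quick inspection of the GLUE MRPC splits; requires the datasets library.
from datasets import load_dataset

mrpc = load_dataset("glue", "mrpc")
print(mrpc)             # DatasetDict: train (3668), validation (408), test (1725)
print(mrpc["train"][0])  # fields: sentence1, sentence2, label, idx
```
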
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
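
A hedged reconstruction of this setup with the `Trainer` API is sketched below; the `output_dir` name is an assumption, and the Adam settings listed above are the library defaults left unchanged:

```python
# Sketch of the fine-tuning run implied by the hyperparameters above.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

raw = load_dataset("glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    # Encode each sentence pair jointly, as BERT expects.
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)

tokenized = raw.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2
)

args = TrainingArguments(
    output_dir="glue-mrpc",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3.0,
    seed=42,
    lr_scheduler_type="linear",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```
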
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3