---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_add_GLUE_Experiment_mrpc_256
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE MRPC
      type: glue
      config: mrpc
      split: validation
      args: mrpc
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7107843137254902
    - name: F1
      type: f1
      value: 0.8233532934131738
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert_add_GLUE_Experiment_mrpc_256

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5932
- Accuracy: 0.7108
- F1: 0.8234
- Combined Score: 0.7671
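
MRPC is a sentence-pair paraphrase task, so the checkpoint is queried with two sentences at once. Below is a minimal usage sketch with the `transformers` Auto classes; the repo id `your-username/distilbert_add_GLUE_Experiment_mrpc_256` is a placeholder for wherever this checkpoint is hosted, and the example sentences are illustrative only.

```python
# Minimal sketch of paraphrase classification with this checkpoint.
# The repo id below is a placeholder; substitute the actual Hub path.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "your-username/distilbert_add_GLUE_Experiment_mrpc_256"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC convention: label 1 = paraphrase, label 0 = not a paraphrase.
sentence1 = "The company reported strong quarterly earnings."
sentence2 = "Quarterly earnings at the company were strong."
inputs = tokenizer(sentence1, sentence2, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print("paraphrase" if pred == 1 else "not a paraphrase")
```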

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
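
The exact training script is not included in this card, but the hyperparameters above map directly onto the `Trainer` API. A rough sketch of the corresponding `TrainingArguments` (the per-epoch evaluation strategy is assumed from the validation table below):

```python
# Approximate reproduction of the listed hyperparameters; treat as a sketch,
# not the authors' actual training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert_add_GLUE_Experiment_mrpc_256",
    learning_rate=5e-05,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
    fp16=True,                    # mixed precision training (Native AMP)
    evaluation_strategy="epoch",  # assumed: the card reports per-epoch validation metrics
)
```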

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.637         | 1.0   | 15   | 0.6242          | 0.6838   | 0.8122 | 0.7480         |
| 0.629         | 2.0   | 30   | 0.6240          | 0.6838   | 0.8122 | 0.7480         |
| 0.6302        | 3.0   | 45   | 0.6248          | 0.6838   | 0.8122 | 0.7480         |
| 0.63          | 4.0   | 60   | 0.6241          | 0.6838   | 0.8122 | 0.7480         |
| 0.6323        | 5.0   | 75   | 0.6240          | 0.6838   | 0.8122 | 0.7480         |
| 0.6299        | 6.0   | 90   | 0.6243          | 0.6838   | 0.8122 | 0.7480         |
| 0.6325        | 7.0   | 105  | 0.6239          | 0.6838   | 0.8122 | 0.7480         |
| 0.6301        | 8.0   | 120  | 0.6239          | 0.6838   | 0.8122 | 0.7480         |
| 0.6324        | 9.0   | 135  | 0.6240          | 0.6838   | 0.8122 | 0.7480         |
| 0.6293        | 10.0  | 150  | 0.6240          | 0.6838   | 0.8122 | 0.7480         |
| 0.6307        | 11.0  | 165  | 0.6239          | 0.6838   | 0.8122 | 0.7480         |
| 0.6302        | 12.0  | 180  | 0.6240          | 0.6838   | 0.8122 | 0.7480         |
| 0.6338        | 13.0  | 195  | 0.6237          | 0.6838   | 0.8122 | 0.7480         |
| 0.6281        | 14.0  | 210  | 0.6225          | 0.6838   | 0.8122 | 0.7480         |
| 0.6263        | 15.0  | 225  | 0.6183          | 0.6838   | 0.8122 | 0.7480         |
| 0.6017        | 16.0  | 240  | 0.5932          | 0.7108   | 0.8234 | 0.7671         |
| 0.5213        | 17.0  | 255  | 0.6146          | 0.6642   | 0.7540 | 0.7091         |
| 0.4383        | 18.0  | 270  | 0.6405          | 0.6912   | 0.7842 | 0.7377         |
| 0.3903        | 19.0  | 285  | 0.6910          | 0.6912   | 0.7872 | 0.7392         |
| 0.363         | 20.0  | 300  | 0.7221          | 0.6544   | 0.7374 | 0.6959         |
| 0.3306        | 21.0  | 315  | 0.7583          | 0.6863   | 0.7808 | 0.7335         |


### Framework versions

- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2