---
license: mit
tags:
- generated_from_trainer
datasets:
- esnli
metrics:
- f1
- accuracy
model-index:
- name: roberta-base-e-snli-classification-nli-base
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: esnli
      type: esnli
      config: plain_text
      split: validation
      args: plain_text
    metrics:
    - name: F1
      type: f1
      value: 0.9108298866502319
    - name: Accuracy
      type: accuracy
      value: 0.9109937004673847
---


# roberta-base-e-snli-classification-nli-base

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the esnli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2611
- F1: 0.9108
- Accuracy: 0.9110
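
The averaging mode behind the F1 score is not recorded; since it tracks accuracy so closely, a macro or weighted average over the three classes is likely. Below is a minimal sketch of the metric computation with the `evaluate` library, using toy predictions in place of model outputs:

```python
import evaluate

f1 = evaluate.load("f1")
accuracy = evaluate.load("accuracy")

# Toy stand-ins for model predictions on the esnli validation split;
# average="macro" is an assumption, not documented in this card.
preds = [0, 1, 2, 1, 0]
refs = [0, 1, 2, 0, 0]
print(f1.compute(predictions=preds, references=refs, average="macro"))
print(accuracy.compute(predictions=preds, references=refs))
```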

## Model description

This is `roberta-base` with a sequence-classification head, fine-tuned for three-way natural language inference: given a premise and a hypothesis, it predicts whether the hypothesis is entailed by, neutral with respect to, or contradicted by the premise. It is trained only on the classification labels of e-SNLI; it does not generate the dataset's natural-language explanations.

## Intended uses & limitations

The model is intended for sentence-pair NLI classification in English. Because SNLI premises are drawn from image captions, performance is likely to degrade on text from other domains (e.g. news, legal, or scientific prose), and the model inherits any annotation artifacts present in SNLI/e-SNLI. It is a classifier only and produces no explanations.
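
A minimal inference sketch with `transformers` follows. The hub id is taken from this card's title and may need a namespace prefix, and the SNLI-style label mapping is an assumption; check the model's `config.json` (`id2label`) before relying on it.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical hub id: the card gives only the model name, so prefix it
# with the owning namespace if needed.
model_id = "roberta-base-e-snli-classification-nli-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# The tokenizer inserts RoBERTa's separator tokens between the pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# Assumed SNLI label order; verify against the model's config.id2label.
id2label = {0: "entailment", 1: "neutral", 2: "contradiction"}
print(id2label.get(pred, str(pred)))
```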

## Training and evaluation data

The model is fine-tuned on the `esnli` train split and evaluated on its validation split (per the metadata above). e-SNLI augments SNLI with human-written explanations, but only the premise/hypothesis pairs and the three-way labels are used here.
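
The pairs can be loaded with the `datasets` library; the sketch below assumes the hub's standard `esnli` columns (`premise`, `hypothesis`, `label`, plus explanation fields this model does not use):

```python
from datasets import load_dataset

# e-SNLI = SNLI pairs plus human-written explanations; this model uses
# only the classification columns.
dataset = load_dataset("esnli")
print(dataset)  # train / validation / test splits
example = dataset["validation"][0]
print(example["premise"], "|", example["hypothesis"], "|", example["label"])
# Labels follow SNLI: 0 = entailment, 1 = neutral, 2 = contradiction
```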

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
- mixed_precision_training: Native AMP
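
These settings map onto `TrainingArguments` roughly as sketched below; this is a reconstruction rather than the original script, and the 400-step eval cadence is inferred from the results table. Note also that the logged results stop at step 10,000 (epoch ≈ 1.16), short of the configured 3 epochs, which suggests training was halted early.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the listed hyperparameters. Adam betas and
# epsilon match the Trainer defaults, so they are not set explicitly.
training_args = TrainingArguments(
    output_dir="roberta-base-e-snli-classification-nli-base",
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=3,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="steps",  # inferred: eval every 400 steps
    eval_steps=400,
    logging_steps=400,
)
```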

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:|
| 1.0317        | 0.05  | 400   | 0.5734          | 0.7771 | 0.7803   |
| 0.544         | 0.09  | 800   | 0.3994          | 0.8548 | 0.8555   |
| 0.4604        | 0.14  | 1200  | 0.3492          | 0.8681 | 0.8687   |
| 0.4235        | 0.19  | 1600  | 0.3323          | 0.8764 | 0.8777   |
| 0.3934        | 0.23  | 2000  | 0.3225          | 0.8831 | 0.8841   |
| 0.3863        | 0.28  | 2400  | 0.3086          | 0.8875 | 0.8872   |
| 0.3767        | 0.33  | 2800  | 0.2972          | 0.8892 | 0.8898   |
| 0.3726        | 0.37  | 3200  | 0.2910          | 0.8932 | 0.8936   |
| 0.3624        | 0.42  | 3600  | 0.2934          | 0.8934 | 0.8937   |
| 0.361         | 0.47  | 4000  | 0.2831          | 0.8989 | 0.8989   |
| 0.3553        | 0.51  | 4400  | 0.2905          | 0.8985 | 0.8993   |
| 0.3451        | 0.56  | 4800  | 0.2725          | 0.9019 | 0.9024   |
| 0.3475        | 0.61  | 5200  | 0.2712          | 0.9046 | 0.9051   |
| 0.3398        | 0.65  | 5600  | 0.2787          | 0.9024 | 0.9028   |
| 0.3322        | 0.7   | 6000  | 0.2697          | 0.9043 | 0.9046   |
| 0.3288        | 0.75  | 6400  | 0.2722          | 0.9006 | 0.9013   |
| 0.324         | 0.79  | 6800  | 0.2677          | 0.9066 | 0.9066   |
| 0.3335        | 0.84  | 7200  | 0.2629          | 0.9075 | 0.9077   |
| 0.3309        | 0.89  | 7600  | 0.2577          | 0.9058 | 0.9061   |
| 0.3236        | 0.93  | 8000  | 0.2561          | 0.9121 | 0.9121   |
| 0.3183        | 0.98  | 8400  | 0.2556          | 0.9084 | 0.9088   |
| 0.3022        | 1.03  | 8800  | 0.2668          | 0.9056 | 0.9064   |
| 0.2974        | 1.07  | 9200  | 0.2519          | 0.9087 | 0.9092   |
| 0.29          | 1.12  | 9600  | 0.2554          | 0.9103 | 0.9109   |
| 0.2855        | 1.16  | 10000 | 0.2611          | 0.9108 | 0.9110   |


### Framework versions

- Transformers 4.27.1
- Pytorch 1.12.1+cu113
- Datasets 2.10.1
- Tokenizers 0.13.2