---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- esnli
metrics:
- accuracy
- f1
- rouge
- bleu
model-index:
- name: t5-small-e-snli-generation-label_and_explanation-selected-b48
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: esnli
      type: esnli
      config: plain_text
      split: validation
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8657793131477342
    - name: F1
      type: f1
      value: 0.8658628497423001
    - name: Rouge1
      type: rouge
      value: 0.6049779979620054
    - name: Bleu
      type: bleu
      value: 0.4039391893498565
---
# t5-small-e-snli-generation-label_and_explanation-selected-b48

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the esnli dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9091
- Accuracy: 0.8658
- F1: 0.8659
- Bertscore F1: 0.9337
- Rouge1: 0.6050
- Rouge2: 0.3983
- Rougel: 0.5492
- Rougelsum: 0.5513
- Bleu: 0.4039
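
The snippet below is a minimal usage sketch. The Hub repo ID and the input prompt template are assumptions (the exact format depends on the preprocessing used at training time); the premise/hypothesis pair is a standard SNLI example.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repo path; substitute the actual namespace of this checkpoint.
model_id = "t5-small-e-snli-generation-label_and_explanation-selected-b48"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed prompt template; the real one depends on the training script.
text = (
    "premise: A man inspects the uniform of a figure in some East Asian country. "
    "hypothesis: The man is sleeping."
)

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Expected output shape (per the checkpoint name "label_and_explanation"):
# an NLI label followed by an explanation, e.g.
# "contradiction: A man cannot inspect a uniform while he is sleeping."
```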

## Model description

Based on the checkpoint name and metadata, this is [t5-small](https://huggingface.co/t5-small) fine-tuned on e-SNLI to jointly generate a natural language inference label (entailment, neutral, or contradiction) and a free-text explanation for a given premise/hypothesis pair, framed as a single text2text-generation task.

## Intended uses & limitations

The model is presumably intended for explainable NLI on English premise/hypothesis pairs in the style of e-SNLI. As with other explanation-generating models, the generated rationales can be fluent yet unfaithful to the model's actual decision process, so they should not be treated as guaranteed justifications. Behavior on out-of-domain text has not been evaluated here.

## Training and evaluation data

The model was fine-tuned on the esnli dataset (plain_text config); per the model-index metadata above, the reported metrics are computed on the validation split.
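
For reference, e-SNLI can be loaded directly with 🤗 Datasets; each example pairs an SNLI premise/hypothesis with up to three human-written explanations:

```python
from datasets import load_dataset

# plain_text is the config named in the model-index metadata above.
esnli = load_dataset("esnli", "plain_text")
print(esnli["validation"][0])
# Fields include: premise, hypothesis, label (0=entailment, 1=neutral,
# 2=contradiction), and explanation_1/2/3.
```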

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
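
As a rough guide, these settings map onto `Seq2SeqTrainingArguments` as sketched below. The argument names follow the 🤗 Transformers API of the versions listed under "Framework versions", but the original training script is not available, so treat this as a reconstruction:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-e-snli-generation-label_and_explanation-selected-b48",
    learning_rate=1e-3,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer default,
    # so no explicit optimizer argument is needed.
    predict_with_generate=True,  # required for ROUGE/BLEU during evaluation
)
```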

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Bertscore F1 | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu   |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------------:|:------:|:------:|:------:|:---------:|:------:|
| 1.7285        | 0.17  | 2000  | 1.9945          | 0.7799   | 0.7792 | 0.9249       | 0.5631 | 0.3517 | 0.5091 | 0.5116    | 0.3617 |
| 1.3318        | 0.35  | 4000  | 1.9494          | 0.7980   | 0.7971 | 0.9295       | 0.5766 | 0.3656 | 0.5218 | 0.5234    | 0.3785 |
| 1.2662        | 0.52  | 6000  | 1.8983          | 0.8322   | 0.8331 | 0.9289       | 0.5769 | 0.3656 | 0.5205 | 0.5225    | 0.3727 |
| 1.2285        | 0.7   | 8000  | 1.9078          | 0.8391   | 0.8396 | 0.9313       | 0.5833 | 0.3734 | 0.5304 | 0.5321    | 0.3884 |
| 1.1973        | 0.87  | 10000 | 1.9246          | 0.8485   | 0.8470 | 0.9303       | 0.5888 | 0.3782 | 0.5322 | 0.5339    | 0.3868 |
| 1.1715        | 1.05  | 12000 | 1.9262          | 0.8561   | 0.8565 | 0.9331       | 0.6020 | 0.3950 | 0.5464 | 0.5479    | 0.4039 |
| 1.1368        | 1.22  | 14000 | 1.9155          | 0.8621   | 0.8612 | 0.9313       | 0.6027 | 0.3918 | 0.5442 | 0.5463    | 0.3889 |
| 1.1281        | 1.4   | 16000 | 1.9091          | 0.8658   | 0.8659 | 0.9337       | 0.6050 | 0.3983 | 0.5492 | 0.5513    | 0.4039 |
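
Metrics of this kind can be computed with the `evaluate` library; a minimal sketch follows. The label-parsing convention used for accuracy below (`"<label>: <explanation>"`) is an assumption about this model's output format:

```python
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
accuracy = evaluate.load("accuracy")

# Assumed output convention: "<label>: <explanation>".
def split_label(text):
    label, _, explanation = text.partition(":")
    return label.strip(), explanation.strip()

preds = ["contradiction: A sleeping man cannot inspect a uniform."]
refs = ["contradiction: The man cannot be sleeping while inspecting a uniform."]

label2id = {"entailment": 0, "neutral": 1, "contradiction": 2}
pred_ids = [label2id.get(split_label(p)[0], -1) for p in preds]
ref_ids = [label2id[split_label(r)[0]] for r in refs]

print(accuracy.compute(predictions=pred_ids, references=ref_ids))
print(rouge.compute(predictions=preds, references=refs))
print(bleu.compute(predictions=preds, references=[[r] for r in refs]))
```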


### Framework versions

- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2