---
license: apache-2.0
tags:
- generated_from_trainer
datasets: din0s/asqa
model-index:
- name: t5-base-pt-asqa-ob
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# t5-base-pt-asqa-ob

This model is a fine-tuned version of [din0s/t5-base-msmarco-nlgen-ob](https://huggingface.co/din0s/t5-base-msmarco-nlgen-ob) on the [ASQA](https://huggingface.co/datasets/din0s/asqa) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7481
- ROUGE-Lsum: 12.3722
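
This checkpoint can be used like any other `transformers` seq2seq model. A minimal inference sketch, assuming the model is published on the Hub under `din0s/t5-base-pt-asqa-ob` (a hypothetical repo id inferred from the model name above) and that a plain question is an acceptable input format:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repo id, inferred from the model name in this card.
model_id = "din0s/t5-base-pt-asqa-ob"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The input format is an assumption; match the preprocessing used for ASQA fine-tuning.
question = "When was the last time the Olympics were held in Japan?"
inputs = tokenizer(question, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```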

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
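
The card metadata points to the [ASQA](https://huggingface.co/datasets/din0s/asqa) dataset. A minimal sketch of loading it with the `datasets` library (split and column names are not documented here and should be inspected first):

```python
from datasets import load_dataset

# Dataset id taken from this card's metadata; splits and columns are assumptions to verify.
asqa = load_dataset("din0s/asqa")
print(asqa)  # inspect the available splits and columns before preprocessing
```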

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
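
For reference, a sketch of how these values might map onto `Seq2SeqTrainingArguments`; the field names below are the standard Trainer arguments, while the evaluation strategy, generation settings, and data collator for this particular run are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the Trainer defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-pt-asqa-ob",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # assumed from the per-epoch results below
    predict_with_generate=True,   # assumed, needed to compute ROUGE-Lsum
)
```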

### Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-Lsum |
|:-------------:|:-----:|:----:|:---------------:|:----------:|
| No log        | 1.0   | 355  | 1.8760          | 11.5138   |
| 2.1344        | 2.0   | 710  | 1.8322          | 11.6843   |
| 1.979         | 3.0   | 1065 | 1.8109          | 11.8592   |
| 1.979         | 4.0   | 1420 | 1.7967          | 11.9466   |
| 1.9493        | 5.0   | 1775 | 1.7871          | 12.0333   |
| 1.9099        | 6.0   | 2130 | 1.7778          | 12.0805   |
| 1.9099        | 7.0   | 2485 | 1.7720          | 12.1659   |
| 1.8748        | 8.0   | 2840 | 1.7668          | 12.2039   |
| 1.8584        | 9.0   | 3195 | 1.7628          | 12.2506   |
| 1.8362        | 10.0  | 3550 | 1.7601          | 12.2557   |
| 1.8362        | 11.0  | 3905 | 1.7575          | 12.2718   |
| 1.8134        | 12.0  | 4260 | 1.7562          | 12.2789   |
| 1.7996        | 13.0  | 4615 | 1.7538          | 12.3179   |
| 1.7996        | 14.0  | 4970 | 1.7529          | 12.3035   |
| 1.8049        | 15.0  | 5325 | 1.7519          | 12.3317   |
| 1.7898        | 16.0  | 5680 | 1.7510          | 12.3717   |
| 1.7872        | 17.0  | 6035 | 1.7497          | 12.3750   |
| 1.7872        | 18.0  | 6390 | 1.7486          | 12.3580   |
| 1.7759        | 19.0  | 6745 | 1.7483          | 12.3698   |
| 1.785         | 20.0  | 7100 | 1.7481          | 12.3722   |
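
The ROUGE-Lsum column is the ROUGE-Lsum score reported during evaluation, scaled by 100. A sketch of reproducing the metric with the `evaluate` library on generated predictions and reference answers (the exact post-processing used by the training script is not recorded here):

```python
import evaluate

rouge = evaluate.load("rouge")

# predictions/references would come from model.generate over the evaluation split.
predictions = ["a generated long-form answer"]
references = ["a reference long-form answer"]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# Recent versions of `evaluate` return plain floats in [0, 1]; multiplying by 100
# matches the scale used in the table above.
print(scores["rougeLsum"] * 100)
```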


### Framework versions

- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1