---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distill-pegasus-cnn-16-4-sec
  results: []
---

# distill-pegasus-cnn-16-4-sec

This model is a fine-tuned version of [sshleifer/distill-pegasus-cnn-16-4](https://huggingface.co/sshleifer/distill-pegasus-cnn-16-4) on a dataset that is not documented here (a usage sketch follows the results below).
It achieves the following results on the evaluation set:
- Loss: 1.0146
- Rouge1: 48.3239
- Rouge2: 34.4713
- Rougel: 43.5113
- Rougelsum: 46.371
- Gen Len: 106.98
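
A minimal inference sketch follows. The checkpoint path is hypothetical (substitute the actual Hub repo id or a local checkpoint directory), and the generation settings are assumptions rather than the values used during evaluation.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical path: replace with the real Hub repo id or a local directory.
checkpoint = "distill-pegasus-cnn-16-4-sec"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

document = "Long input text to summarize ..."
inputs = tokenizer(document, truncation=True, return_tensors="pt")

# Eval-set summaries averaged ~107 generated tokens, so leave some headroom.
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```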

## Model description

This is a sequence-to-sequence abstractive summarization model. Its base checkpoint, [sshleifer/distill-pegasus-cnn-16-4](https://huggingface.co/sshleifer/distill-pegasus-cnn-16-4), is a distilled PEGASUS with 16 encoder layers and 4 decoder layers derived from a PEGASUS model trained on CNN/DailyMail. The fine-tuning corpus is not documented here; the `-sec` suffix in the repository name hints at a domain-specific dataset, but this is unconfirmed.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
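
As a rough sketch, these values map onto `Seq2SeqTrainingArguments` as shown below; `output_dir`, the evaluation strategy, and `predict_with_generate` are assumptions, not documented parts of the original run.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="distill-pegasus-cnn-16-4-sec",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=12,
    lr_scheduler_type="linear",
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: metrics above are logged per epoch
    predict_with_generate=True,   # assumption: needed to compute ROUGE / Gen Len
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default,
# so no extra arguments are required for it.
```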

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 99   | 3.0918          | 20.297  | 6.5201  | 16.1329 | 18.0062   | 64.38   |
| No log        | 2.0   | 198  | 2.4999          | 23.2475 | 10.4548 | 19.4955 | 21.3927   | 73.92   |
| No log        | 3.0   | 297  | 2.0991          | 25.1919 | 13.2866 | 22.1497 | 23.7988   | 80.5    |
| No log        | 4.0   | 396  | 1.7855          | 29.3799 | 17.4892 | 26.0768 | 27.3547   | 84.08   |
| No log        | 5.0   | 495  | 1.5388          | 34.3057 | 21.5888 | 30.043  | 32.1758   | 98.26   |
| 2.7981        | 6.0   | 594  | 1.3553          | 36.5817 | 22.9587 | 32.0113 | 34.3963   | 95.02   |
| 2.7981        | 7.0   | 693  | 1.2281          | 37.9149 | 24.4547 | 33.9621 | 35.7424   | 90.04   |
| 2.7981        | 8.0   | 792  | 1.1430          | 40.9219 | 27.4248 | 36.1746 | 38.8887   | 96.56   |
| 2.7981        | 9.0   | 891  | 1.0844          | 43.935  | 29.7536 | 38.63   | 41.6618   | 98.7    |
| 2.7981        | 10.0  | 990  | 1.0472          | 45.3353 | 32.042  | 40.8945 | 43.3416   | 106.22  |
| 1.5684        | 11.0  | 1089 | 1.0254          | 47.6564 | 34.3221 | 43.1757 | 45.7094   | 107.88  |
| 1.5684        | 12.0  | 1188 | 1.0146          | 48.3239 | 34.4713 | 43.5113 | 46.371    | 106.98  |
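
For reference, ROUGE values like those in the table can be computed with Datasets 2.1.0 as sketched below. The prediction/reference strings are placeholders, and reporting the mid F-measure scaled to percentages is an assumption based on common practice in the example summarization scripts.

```python
from datasets import load_metric

rouge = load_metric("rouge")

# Placeholder strings: substitute decoded model outputs and gold summaries.
predictions = ["the generated summary"]
references = ["the reference summary"]

scores = rouge.compute(predictions=predictions, references=references)
# Each entry is an AggregateScore; mid.fmeasure * 100 matches the table's scale.
print({k: round(v.mid.fmeasure * 100, 4) for k, v in scores.items()})
```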


### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1