---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2_left_out_switchboard
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gpt2_left_out_switchboard

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9378

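A minimal generation sketch with the `transformers` library, assuming the checkpoint is loaded from this repository (the `<namespace>` placeholder below stands in for the actual owner, which is not specified here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository path: swap in the real namespace before use.
model_id = "<namespace>/gpt2_left_out_switchboard"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Sample a short continuation from the fine-tuned model.
inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
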
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP

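These values correspond to the standard `transformers` `Trainer` setup. As a rough reconstruction (not the original training script; `output_dir` and the per-device interpretation of the batch sizes are assumptions), the configuration would look approximately like:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters; the actual
# training script may differ (data collation, logging, checkpointing, ...).
training_args = TrainingArguments(
    output_dir="gpt2_left_out_switchboard",  # assumed output directory
    learning_rate=5e-4,
    per_device_train_batch_size=64,   # listed as train_batch_size: 64
    per_device_eval_batch_size=64,    # listed as eval_batch_size: 64
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=10,
    fp16=True,                        # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=500,                   # matches the 500-step cadence below
)
```

The `eval_steps=500` setting is inferred from the evaluation cadence visible in the results table that follows.
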
### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.983         | 0.24  | 500   | 5.0786          |
| 4.7603        | 0.48  | 1000  | 4.6865          |
| 4.4521        | 0.73  | 1500  | 4.4635          |
| 4.2512        | 0.97  | 2000  | 4.3124          |
| 4.0458        | 1.21  | 2500  | 4.2272          |
| 3.9687        | 1.45  | 3000  | 4.1443          |
| 3.9024        | 1.69  | 3500  | 4.0705          |
| 3.8439        | 1.93  | 4000  | 4.0057          |
| 3.6791        | 2.18  | 4500  | 3.9845          |
| 3.6259        | 2.42  | 5000  | 3.9471          |
| 3.6137        | 2.66  | 5500  | 3.9057          |
| 3.592         | 2.9   | 6000  | 3.8654          |
| 3.4438        | 3.14  | 6500  | 3.8758          |
| 3.3844        | 3.38  | 7000  | 3.8570          |
| 3.3977        | 3.63  | 7500  | 3.8324          |
| 3.4015        | 3.87  | 8000  | 3.8053          |
| 3.2638        | 4.11  | 8500  | 3.8300          |
| 3.1771        | 4.35  | 9000  | 3.8250          |
| 3.1914        | 4.59  | 9500  | 3.8070          |
| 3.1993        | 4.84  | 10000 | 3.7853          |
| 3.1089        | 5.08  | 10500 | 3.8146          |
| 2.9539        | 5.32  | 11000 | 3.8262          |
| 2.9853        | 5.56  | 11500 | 3.8173          |
| 2.9984        | 5.8   | 12000 | 3.8020          |
| 2.9462        | 6.04  | 12500 | 3.8259          |
| 2.7343        | 6.29  | 13000 | 3.8527          |
| 2.7724        | 6.53  | 13500 | 3.8499          |
| 2.7817        | 6.77  | 14000 | 3.8423          |
| 2.7789        | 7.01  | 14500 | 3.8510          |
| 2.5477        | 7.25  | 15000 | 3.8873          |
| 2.5643        | 7.5   | 15500 | 3.8904          |
| 2.5842        | 7.74  | 16000 | 3.8896          |
| 2.5913        | 7.98  | 16500 | 3.8858          |
| 2.4293        | 8.22  | 17000 | 3.9177          |
| 2.4253        | 8.46  | 17500 | 3.9231          |
| 2.4274        | 8.7   | 18000 | 3.9240          |
| 2.4331        | 8.95  | 18500 | 3.9254          |
| 2.362         | 9.19  | 19000 | 3.9346          |
| 2.3519        | 9.43  | 19500 | 3.9373          |
| 2.3498        | 9.67  | 20000 | 3.9378          |
| 2.3461        | 9.91  | 20500 | 3.9378          |


### Framework versions

- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3