fatihbicer committed
Commit e537d3a
1 Parent(s): eb4f5c9

End of training

Files changed (1):
1. README.md +8 -6
README.md CHANGED

@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1901
+- Loss: 0.1530

 ## Model description

@@ -42,15 +42,17 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 2
+- num_epochs: 4
 - mixed_precision_training: Native AMP

 ### Training results

-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| No log        | 0.96  | 18   | 0.4368          |
-| No log        | 1.92  | 36   | 0.1901          |
+| Training Loss | Epoch  | Step | Validation Loss |
+|:-------------:|:------:|:----:|:---------------:|
+| No log        | 0.96   | 18   | 0.3476          |
+| No log        | 1.9733 | 37   | 0.1676          |
+| No log        | 2.9867 | 56   | 0.1567          |
+| No log        | 3.84   | 72   | 0.1530          |


 ### Framework versions
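The updated results table lets us back out an approximate training-set size: the first eval row pairs global step 18 with fractional epoch 0.96, and the hyperparameter list gives a total train batch size of 16. A minimal sketch of that arithmetic, assuming the Trainer's usual convention that `Epoch` is `global_step / steps_per_epoch`:

```python
# Sketch: estimate the dataset size implied by the new results table.
# Assumes each row pairs the global optimizer step with the fractional epoch.
step, epoch = 18, 0.96                  # first eval row in the updated table
steps_per_epoch = step / epoch          # ≈ 18.75
total_train_batch_size = 16             # from the hyperparameter list
approx_examples = steps_per_epoch * total_train_batch_size  # ≈ 300
print(round(steps_per_epoch, 2), round(approx_examples))
```

Roughly 300 training examples, which is consistent with the final row stopping at epoch 3.84 of 4 (step 72 = 3.84 × 18.75) when the last partial batch is dropped.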