Sakonii committed
Commit 37204cc
1 Parent(s): e71165a

update model card README.md

Files changed (1): README.md (+8, -6)
README.md CHANGED

```diff
@@ -12,9 +12,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # distilgpt2-nepali
 
-This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
+This model is a fine-tuned version of [Sakonii/distilgpt2-nepali](https://huggingface.co/Sakonii/distilgpt2-nepali) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.7299
+- Loss: 3.3749
 
 ## Model description
 
@@ -39,14 +39,16 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1
+- num_epochs: 3
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step  | Validation Loss |
-|:-------------:|:-----:|:-----:|:---------------:|
-| 3.869         | 1.0   | 94395 | 3.7299          |
+| Training Loss | Epoch | Step   | Validation Loss |
+|:-------------:|:-----:|:------:|:---------------:|
+| 3.7645        | 1.0   | 94395  | 3.6291          |
+| 3.5857        | 2.0   | 188790 | 3.4442          |
+| 3.505         | 3.0   | 283185 | 3.3749          |
 
 
 ### Framework versions
```
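The reported losses are token-level cross-entropy, so the training results convert directly to perplexity via `exp(loss)`. A quick sketch — the loss values are copied from the results table above; the exp conversion is the standard causal-LM relationship, not something stated in this card:

```python
import math

# Validation losses per epoch, copied from the training-results table
validation_loss = {1: 3.6291, 2: 3.4442, 3: 3.3749}

# Perplexity of a causal LM is exp(cross-entropy loss)
perplexity = {epoch: math.exp(loss) for epoch, loss in validation_loss.items()}

for epoch, ppl in perplexity.items():
    print(f"epoch {epoch}: perplexity ≈ {ppl:.2f}")
```

This makes the improvement over the single-epoch run (loss 3.7299) easy to read: validation perplexity drops from roughly 38 after epoch 1 to roughly 29 after epoch 3.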
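The `lr_scheduler_type: linear` entry means the learning rate decays linearly toward zero over the run. A minimal sketch, assuming no warmup; the total step count (283185) comes from the results table, while the initial learning rate here is purely illustrative since this diff excerpt does not show it:

```python
def linear_lr(step: int, total_steps: int = 283185, initial_lr: float = 5e-5) -> float:
    """Linearly decay the learning rate from initial_lr at step 0 to 0 at total_steps."""
    # Clamp at 0.0 so steps past the end of training never yield a negative rate
    return initial_lr * max(0.0, 1.0 - step / total_steps)
```

With warmup enabled, the schedule would instead first ramp up from zero before beginning this linear decay.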