b09501048 committed on
Commit 5bb1746
1 Parent(s): fbcc994

Model save

Files changed (1)
  1. README.md +6 -4
README.md CHANGED
@@ -12,7 +12,9 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/djengo890-national-taiwan-university/ADL_HW2_RT_GPT2/runs/u6lcqmvu)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/djengo890-national-taiwan-university/ADL_HW2_RT_GPT2/runs/ov47z0tg)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/djengo890-national-taiwan-university/ADL_HW2_RT_GPT2/runs/ov47z0tg)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/djengo890-national-taiwan-university/ADL_HW2_RT_GPT2/runs/ov47z0tg)
 # ADL_HW2_GPT2
 
 This model is a fine-tuned version of [ckiplab/gpt2-tiny-chinese](https://huggingface.co/ckiplab/gpt2-tiny-chinese) on an unknown dataset.
@@ -35,12 +37,12 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5.6e-05
-- train_batch_size: 1
-- eval_batch_size: 1
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 2
+- num_epochs: 8
 
 ### Framework versions
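For context, here is a minimal sketch of how the updated hyperparameters would map onto `transformers.TrainingArguments`. It is not the authors' training script: the output directory, the `report_to="wandb"` setting, and the dataset handling are assumptions, not taken from this commit.

```python
# Sketch only: maps the README's hyperparameters onto TrainingArguments.
# Paths and reporting settings are assumptions, not part of this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, set_seed

set_seed(42)  # seed: 42

# Base checkpoint named in the model card.
tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-tiny-chinese")
model = AutoModelForCausalLM.from_pretrained("ckiplab/gpt2-tiny-chinese")

training_args = TrainingArguments(
    output_dir="ADL_HW2_GPT2",        # placeholder output directory
    learning_rate=5.6e-5,             # learning_rate: 5.6e-05
    per_device_train_batch_size=32,   # train_batch_size: 32
    per_device_eval_batch_size=32,    # eval_batch_size: 32
    num_train_epochs=8,               # num_epochs: 8
    lr_scheduler_type="linear",       # lr_scheduler_type: linear
    adam_beta1=0.9,                   # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # ... and epsilon=1e-08
    seed=42,
    report_to="wandb",                # assumed, based on the W&B badges above
)

# A Trainer built from `model`, `training_args`, and a tokenized dataset
# (not included in this commit) would train under this configuration.
```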