minwooeom committed
Commit 2a1fa47
Parent: e11d56b

update model card README.md

Files changed (1):
  1. README.md +4 -10
README.md CHANGED
@@ -14,9 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # t5-qg
 
-This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the squad_modified_for_t5_qg dataset.
-It achieves the following results on the evaluation set:
-- Loss: 2.2054
+This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
 
 ## Model description
 
@@ -35,10 +33,12 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
+- learning_rate: 0.0001
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
+- gradient_accumulation_steps: 16
+- total_train_batch_size: 128
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
@@ -46,12 +46,6 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 2.8719 | 0.21 | 500 | 2.5249 |
-| 2.5501 | 0.42 | 1000 | 2.3358 |
-| 2.4402 | 0.64 | 1500 | 2.2440 |
-| 2.4095 | 0.85 | 2000 | 2.2054 |
 
 
 ### Framework versions
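
For reference, the updated hyperparameters map directly onto a Hugging Face `Seq2SeqTrainingArguments` configuration: the total_train_batch_size of 128 is the effective batch size from train_batch_size 8 × gradient_accumulation_steps 16. Below is a minimal sketch, assuming the usual Trainer workflow; the commit does not include the training script, and names such as `output_dir` are illustrative only.

```python
# Sketch of the card's hyperparameters as Seq2SeqTrainingArguments.
# Assumes the standard Hugging Face Trainer workflow; not taken from the commit itself.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-qg",              # illustrative path, not stated in the card
    learning_rate=1e-4,              # 0.0001 (raised from 5e-05 in this commit)
    per_device_train_batch_size=8,   # train_batch_size: 8
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=16,  # 8 * 16 = effective batch size of 128
    warmup_steps=500,                # lr_scheduler_warmup_steps: 500
    lr_scheduler_type="linear",
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon=1e-08
)
```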