kaejo98 committed
Commit ba60400
1 Parent(s): dc51d72

update model card README.md

Files changed (1)
  1. README.md +4 -5
README.md CHANGED
@@ -20,8 +20,7 @@ More information needed
 
 ## Intended uses & limitations
 
-The model takes context as an input sequence, and will generate a full question sentence as an output sequence. The max sequence length is 512 tokens. Inputs should be organised into the following format: \<generate_questions\> paragraph: context text here'
-
+More information needed
 
 ## Training and evaluation data
 
@@ -32,13 +31,13 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
+- learning_rate: 3e-05
 - train_batch_size: 16
-- eval_batch_size: 8
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_ratio: 0.35
+- lr_scheduler_warmup_ratio: 0.25
 - num_epochs: 5
 
 ### Framework versions
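
For reference, the input format removed in the first hunk (`<generate_questions> paragraph: <context>`) can be exercised with a short `transformers` sketch. This is only an illustration: the checkpoint id, example context, and generation settings below are assumptions, not values from the card.

```python
# Minimal usage sketch for the prompt format described in the previous card
# revision; model_id, context, and max_new_tokens are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "path/to/checkpoint"  # replace with this repository's model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = "The Eiffel Tower was completed in 1889 and stands in Paris."
prompt = f"<generate_questions> paragraph: {context}"

# The previous card text notes a maximum sequence length of 512 tokens.
inputs = tokenizer(prompt, max_length=512, truncation=True, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```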
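
The updated hyperparameters in the second hunk map directly onto `transformers` training arguments. The sketch below shows one plausible configuration using `Seq2SeqTrainingArguments`; the `output_dir` and anything not listed in the card are assumptions.

```python
# Hedged sketch of the listed hyperparameters as Seq2SeqTrainingArguments;
# output_dir is a placeholder, all other values are taken from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="qg-model",          # placeholder output directory
    learning_rate=3e-5,             # updated from 5e-05 in this commit
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,  # updated from 8 in this commit
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.25,              # updated from 0.35 in this commit
    num_train_epochs=5,
)
```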