4season committed
Commit 9237182
1 Parent(s): 85de3ae

Update README.md

Files changed (1):
  1. README.md +31 -3
README.md CHANGED
@@ -1,3 +1,31 @@
- ---
- license: mit
- ---
+ ---
+ license: apache-2.0
+ language:
+ - en
+ ---
+
+ # 4season/pt_model_test1
+
+ # **Introduction**
+ This model is a test version (a pt model).
+
+ We utilize state-of-the-art instruction fine-tuning methods.
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 2
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 1.0
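The totals in the hyperparameter list follow from the per-device settings; a minimal sanity-check sketch in pure Python, assuming the usual relation (per-device batch × devices × accumulation steps) that these field names suggest:

```python
# Per-device training settings, copied from the hyperparameter list above.
train_batch_size = 2           # per-device train batch size
eval_batch_size = 8            # per-device eval batch size
num_devices = 8                # multi-GPU, 8 devices
gradient_accumulation_steps = 8

# Effective train batch size = per-device batch * devices * accumulation steps.
total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)  # → 128, matching total_train_batch_size above

# Evaluation is not gradient-accumulated, so the total is per-device * devices.
total_eval_batch_size = eval_batch_size * num_devices
print(total_eval_batch_size)   # → 64, matching total_eval_batch_size above
```

This confirms the listed totals (128 and 64) are internally consistent with the per-device values.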