postitive666 committed on
Commit cb68d11
1 Parent(s): 3c7b14a

orpo chinese phi3 4K

Files changed (1)
  1. README.md +1 -60
README.md CHANGED
@@ -1,62 +1,3 @@
  ---
- license: other
- base_model: /data/user/chengrui/project/mergekit/Phi-3-mini-128k-instruct
- tags:
- - llama-factory
- - full
- - generated_from_trainer
- model-index:
- - name: phi3-chinese-orpo
-   results: []
  ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # phi3-chinese-orpo
-
- This model is a fine-tuned version of [/data/user/chengrui/project/mergekit/Phi-3-mini-128k-instruct](https://huggingface.co//data/user/chengrui/project/mergekit/Phi-3-mini-128k-instruct) on the dpo_mix_en and the dpo_mix_zh datasets.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-06
- - train_batch_size: 1
- - eval_batch_size: 1
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 6
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 48
- - total_eval_batch_size: 6
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - lr_scheduler_warmup_steps: 20
- - num_epochs: 3.0
- - mixed_precision_training: Native AMP
-
- ### Training results
-
-
-
- ### Framework versions
-
- - Transformers 4.40.0
- - Pytorch 2.1.0+cu121
- - Datasets 2.15.0
- - Tokenizers 0.19.1
  ---
+ license: mit
  ---
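
For reference, the card removed above described a full ORPO fine-tune of Phi-3-mini-128k-instruct on the dpo_mix_en and dpo_mix_zh preference datasets. Below is a minimal sketch of loading and prompting such a checkpoint with 🤗 Transformers; the repository id is a placeholder assumption, not something stated in this commit.

```python
# Minimal usage sketch for the checkpoint described in the removed card.
# NOTE: the repo id below is a placeholder assumption; substitute the actual
# repository this commit belongs to.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/phi3-chinese-orpo"  # placeholder, not from the commit

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # the removed card reports mixed-precision (AMP) training
    device_map="auto",
    trust_remote_code=True,  # Phi-3 checkpoints may ship custom modeling code
)

# Phi-3 is an instruct model, so build the prompt through the chat template.
messages = [{"role": "user", "content": "Please introduce yourself in Chinese."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```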