Mingyuyang-1 committed · Commit 29d3614 · 1 Parent(s): 0e1add4

Update README.md

Files changed (1)
  1. README.md +11 -0
README.md CHANGED
@@ -43,6 +43,17 @@ The Zebra-Llama models are not trained from scratch. Instead, they are composed
  | 5. SFT | End-to-End Knowledge Distillation | The composed hybrid model is fine-tuned via knowledge distillation, using an 8B model as a teacher to transfer rich, pre-trained knowledge. |
  | 6. Alignment | Direct Preference Optimization (DPO) | In the final stage, DPO is used to align the model's preferences, with the distilled student model itself serving as the reference model for stability. |
 
+ ## Training Data
+
+ | Stage | Dataset | License |
+ |---------|--------------------------------------------------------------------------|---------------------------|
+ | ILD/SFT | https://huggingface.co/datasets/teknium/OpenHermes-2.5 | Refer to source materials |
+ | ILD/SFT | https://huggingface.co/datasets/tomg-group-umd/GenQA | CC BY-NC 4.0 |
+ | ILD/SFT | https://huggingface.co/datasets/BAAI/Infinity-Instruct | CC BY-SA 4.0 |
+ | DPO | https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized | MIT |
+ | DPO | https://huggingface.co/datasets/HuggingFaceH4/orca_dpo_pairs | MIT |
+ | DPO | https://huggingface.co/datasets/JunxiongWang/llama3-ultrafeedback-armorm | MIT |
+
  ## Getting Started
 
  ### Installation
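
Below is a minimal, illustrative sketch of how the DPO stage described in the pipeline table could be run against one of the preference datasets listed in the new Training Data table, using the Hugging Face `datasets` and `trl` libraries. The checkpoint path, split name, hyperparameters, and output directory are assumptions for illustration, not values taken from this commit; passing `ref_model=None` simply lets `DPOTrainer` clone and freeze the policy as its own reference, which mirrors the card's note that the distilled student itself serves as the reference model.

```python
# Illustrative sketch only; paths, split names, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# One of the DPO preference datasets from the Training Data table
# (split name taken from that dataset's card).
prefs = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

# Hypothetical path to the distilled student produced by the SFT/KD stage.
student_path = "path/to/distilled-student"
model = AutoModelForCausalLM.from_pretrained(student_path)
tokenizer = AutoTokenizer.from_pretrained(student_path)

# ref_model=None makes trl copy and freeze the policy (the distilled student)
# and use that copy as the DPO reference model.
trainer = DPOTrainer(
    model=model,
    ref_model=None,
    args=DPOConfig(output_dir="zebra-llama-dpo", beta=0.1),
    train_dataset=prefs,
    processing_class=tokenizer,  # older trl releases use tokenizer= instead
)
trainer.train()
```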