Mingyuyang-1 committed
Commit 934957b · Parent: 8ca62e9

Update README.md

Files changed (1)
  1. README.md +13 -0
README.md CHANGED
@@ -3,6 +3,9 @@ base_model:
  - meta-llama/Llama-3.2-3B-Instruct
  datasets:
  - JunxiongWang/sftdatasetv3
+ - HuggingFaceH4/ultrafeedback_binarized
+ - HuggingFaceH4/orca_dpo_pairs
+ - JunxiongWang/llama3-ultrafeedback-armorm
  model-index:
  - name: X-EcoMLA-3B3B-fixed-kv816-DPO
    results: []
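
For reference, the three preference datasets added to the metadata block above can be pulled with the `datasets` library. A minimal sketch; the split names are assumptions and are not part of this commit:

```python
from datasets import load_dataset

# DPO preference data listed in the updated model card metadata.
# Split names are assumed here, not taken from the commit itself.
ultrafeedback = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
orca_pairs = load_dataset("HuggingFaceH4/orca_dpo_pairs", split="train")
armorm_prefs = load_dataset("JunxiongWang/llama3-ultrafeedback-armorm", split="train")

print(len(ultrafeedback), len(orca_pairs), len(armorm_prefs))
```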
 
@@ -35,6 +38,16 @@ The X-EcoMLA models are not trained from scratch. Instead, they are composed fro
  | 3. SFT | End-to-End Knowledge Distillation | The initialized model is fine-tuned via knowledge distillation. |
  | 4. Alignment | Direct Preference Optimization (DPO) | In the final stage, DPO is used to align the model's preferences, with the distilled student model itself serving as the reference model for stability. |
 
+ ## Training Data
+
+ | Stage | Dataset | License |
+ |-------|---------|---------|
+ | SFT | https://huggingface.co/datasets/teknium/OpenHermes-2.5 | Refer to source materials |
+ | SFT | https://huggingface.co/datasets/tomg-group-umd/GenQA | CC BY-NC 4.0 |
+ | SFT | https://huggingface.co/datasets/BAAI/Infinity-Instruct | CC BY-SA 4.0 |
+ | DPO | https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized | MIT |
+ | DPO | https://huggingface.co/datasets/HuggingFaceH4/orca_dpo_pairs | MIT |
+ | DPO | https://huggingface.co/datasets/JunxiongWang/llama3-ultrafeedback-armorm | MIT |
 
  ## Getting Started
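
The alignment row above notes that the distilled student itself serves as the frozen reference model during DPO. As a rough illustration of that setup, here is a plain-PyTorch sketch of the DPO objective; the function and argument names are illustrative and not taken from the X-EcoMLA code:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO objective. Each tensor holds per-example summed log-probabilities
    of the chosen or rejected response under the policy being trained or the
    frozen reference model (here, a frozen copy of the distilled student)."""
    policy_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # Reward the policy for preferring the chosen response by a wider margin
    # than the reference model does.
    return -F.logsigmoid(beta * (policy_logratios - ref_logratios)).mean()
```

Anchoring the preference update to the student rather than the original teacher keeps the aligned model close to the behaviour learned during distillation, which is the stability rationale stated in the table.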