JunxiongWang committed on
Commit c345451
1 Parent(s): 7ae6913

Update README.md

Files changed (1)
  1. README.md +24 -59
README.md CHANGED
@@ -1,62 +1,27 @@
  ---
- base_model: Llama-Mamba2-3.1-8B-teacher-Llama-3.1-70B-Instruct-kl1.0-ce0.0
- tags:
- - alignment-handbook
- - generated_from_trainer
- datasets:
- - HuggingFaceH4/ultrafeedback_binarized
- - HuggingFaceH4/orca_dpo_pairs
- - JunxiongWang/llama3-ultrafeedback-armorm
- model-index:
- - name: Llama-Mamba2-3.1-8B-teacher-Llama-3.1-70B-Instruct-kl1.0-ce0.0-dpo-short
- results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/osieosie/huggingface/runs/1ov6efjv)
- # Llama-Mamba2-3.1-8B-teacher-Llama-3.1-70B-Instruct-kl1.0-ce0.0-dpo-short
-
- This model is a fine-tuned version of [Llama-Mamba2-3.1-8B-teacher-Llama-3.1-70B-Instruct-kl1.0-ce0.0](https://huggingface.co/Llama-Mamba2-3.1-8B-teacher-Llama-3.1-70B-Instruct-kl1.0-ce0.0) on the HuggingFaceH4/ultrafeedback_binarized, the HuggingFaceH4/orca_dpo_pairs and the JunxiongWang/llama3-ultrafeedback-armorm datasets.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-07
- - train_batch_size: 4
- - eval_batch_size: 4
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - total_train_batch_size: 32
- - total_eval_batch_size: 32
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 1
-
- ### Training results
-
-
-
- ### Framework versions
-
- - Transformers 4.43.1
- - Pytorch 2.1.1+cu118
- - Datasets 2.20.0
- - Tokenizers 0.19.1
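(For reference, the hyperparameters listed in the removed card map onto a standard `transformers.TrainingArguments` setup roughly as sketched below. This is an illustrative sketch only, not the alignment-handbook DPO recipe actually used; the `output_dir` name is hypothetical, and the per-device batch size of 4 across 8 GPUs accounts for the listed total batch size of 32.)

```python
# Illustrative only: how the hyperparameters listed in the removed card map
# onto transformers.TrainingArguments. The actual run used the
# alignment-handbook DPO scripts, not this snippet.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-Mamba2-3.1-8B-dpo",  # hypothetical output path
    learning_rate=5e-7,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    adam_beta1=0.9,    # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```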
 
  ---
+ license: apache-2.0
  ---

+ Zero-shot results when using [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) as the teacher model and [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) as the initialization for the student model.
+
+ | Task | Llama-3.1-8B-Instruct | Llama3.1-Mamba-8B-distill | Llama3.1-Mamba-8B-dpo | Llama3.1-Mamba2-8B-distill | Llama3.1-Mamba2-8B-dpo |
+ |---------------------|-----------------------|--------------------------|-----------------------|---------------------------|-----------------------|
+ | arc_challenge | 0.552 | 0.5384 | 0.5657 | 0.5265 | 0.5973 |
+ | arc_easy | 0.8178 | 0.8224 | 0.8401 | 0.822 | 0.8481 |
+ | hellaswag | 0.7921 | 0.7591 | 0.7736 | 0.7536 | 0.7969 |
+ | mmlu (0 shot) | 0.6812 | 0.6213 | 0.636 | 0.6101 | 0.5974 |
+ | openbookqa | 0.432 | 0.428 | 0.442 | 0.416 | 0.44 |
+ | piqa | 0.8079 | 0.7933 | 0.8041 | 0.7889 | 0.8003 |
+ | pubmedqa | 0.752 | 0.72 | 0.744 | 0.726 | 0.746 |
+ | race | 0.4478 | 0.4211 | 0.4344 | 0.4211 | 0.4612 |
+ | winogrande | 0.7388 | 0.7277 | 0.738 | 0.7174 | 0.7411 |
+ | truthful | 0.4267 | 0.4002 | 0.4607 | 0.4031 | 0.5022 |
+
+ ```
+ @article{junxiongdaniele2024mambainllama,
+ title = {The Mamba in the Llama: Distilling and Accelerating Hybrid Models},
+ author = {Junxiong Wang and Daniele Paliotta and Avner May and Alexander M. Rush and Tri Dao},
+ journal = {arXiv preprint arXiv:2408.15237},
+ year = {2024}
+ }
+ ```
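(The task names in the table read like EleutherAI lm-evaluation-harness tasks. As a rough, unofficial illustration of how the baseline column could be reproduced, not the authors' published evaluation script, a sketch using `lm_eval.simple_evaluate` follows. The hybrid Mamba/Mamba2 checkpoints are assumed to need the loading code from the MambaInLlama repository rather than the plain `hf` loader, and the exact TruthfulQA variant behind the `truthful` row is not stated, so it is omitted.)

```python
# Unofficial sketch: rerunning the zero-shot baseline column with
# EleutherAI's lm-evaluation-harness (pip install lm-eval).
# Assumption: the plain "hf" loader covers the Transformer baseline only;
# the hybrid Mamba checkpoints would need the MambaInLlama loading code.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-3.1-8B-Instruct,dtype=bfloat16",
    tasks=[
        "arc_challenge", "arc_easy", "hellaswag", "mmlu",
        "openbookqa", "piqa", "pubmedqa", "race", "winogrande",
    ],
    num_fewshot=0,  # the table reports zero-shot accuracy
    batch_size=8,
)

# Print per-task accuracy; metric keys such as "acc,none" follow the
# harness's 0.4+ naming convention.
for task, metrics in results["results"].items():
    print(task, metrics.get("acc,none", metrics))
```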