---
license: other
license_name: qwen
license_link: LICENSE
---

# 🦙 Qwen-72B-Llama

This is the 🦙 llamafied version of [Qwen/Qwen-72B](https://huggingface.co/Qwen/Qwen-72B).
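Because the converted weights follow the standard LLaMA layout, the model should load with the stock `transformers` classes and without `trust_remote_code`. A minimal loading sketch, assuming the weights are hosted on the Hub (the repo id below is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- point this at the repository hosting the llamafied weights.
model_id = "your-namespace/Qwen-72B-Llama"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 72B parameters: expect multi-GPU or quantization in practice
    device_map="auto",
)

inputs = tokenizer("Qwen-72B-Llama is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```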

## 🛠️ Reproduction

I used the [`llamafy_qwen.py`](https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py) script from [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to convert the weights.
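For context, the core of the conversion is re-mapping Qwen's parameter layout onto LLaMA's; in particular, Qwen stores the Q/K/V projections as one fused `c_attn` tensor that has to be split into separate `q_proj`/`k_proj`/`v_proj` tensors. The snippet below is only a hypothetical illustration of that one step, not the script itself, which also renames the remaining tensors, carries over the attention biases, re-shards the checkpoint, and writes a LLaMA-compatible config:

```python
import torch

def split_fused_qkv(c_attn_weight: torch.Tensor, hidden_size: int):
    """Split Qwen's fused (3*hidden, hidden) QKV weight into LLaMA-style q/k/v projections."""
    q_proj, k_proj, v_proj = torch.split(c_attn_weight, hidden_size, dim=0)
    return q_proj, k_proj, v_proj

# Toy dimensions for illustration (Qwen-72B itself uses hidden_size = 8192).
hidden_size = 16
fused = torch.randn(3 * hidden_size, hidden_size)
q, k, v = split_fused_qkv(fused, hidden_size)
print(q.shape, k.shape, v.shape)  # each torch.Size([16, 16])
```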

## 🔠 Tokenizer

After I converted the weights, I took the tokenizer from [KnutJaegersberg/Qwen-14B-Llamafied](https://huggingface.co/KnutJaegersberg/Qwen-14B-Llamafied) and uploaded it to this repository.
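A minimal sketch of how such a tokenizer copy can be done with `transformers` (the local path and target repo id are placeholders; the exact upload steps may have differed):

```python
from transformers import AutoTokenizer

# Fetch the tokenizer from the 14B llamafied repository ...
tokenizer = AutoTokenizer.from_pretrained("KnutJaegersberg/Qwen-14B-Llamafied")

# ... save it next to the converted 72B weights (placeholder path) ...
tokenizer.save_pretrained("./Qwen-72B-Llama")

# ... and push it to the target repository (placeholder repo id).
tokenizer.push_to_hub("your-namespace/Qwen-72B-Llama")
```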

## 📊 Eval Scores Compared to Original Model

Here is a comparison of evaluation scores against the original model, based on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

| Metric                | Qwen-72B      | **Qwen-72B-Llama** |
|-----------------------|---------------|--------------------|
| Avg.                  | 73.6          | **69.53**          |
| ARC (25-shot)         | 65.19         | **64.85**          |
| HellaSwag (10-shot)   | 85.94         | **83.27**          |
| MMLU (5-shot)         | 77.37         | **73.66**          |
| TruthfulQA (0-shot)   | 60.19         | **57.6**           |
| Winogrande (5-shot)   | 82.48         | **81.53**          |
| GSM8K (5-shot)        | 70.43         | **56.25**          |