---
license: wtfpl
datasets:
- JosephusCheung/GuanacoDataset
- Open-Orca/OpenOrca
- stingning/ultrachat
- meta-math/MetaMathQA
- liuhaotian/LLaVA-Instruct-150K
- jondurbin/airoboros-3.1
- WizardLM/WizardLM_evol_instruct_V2_196k
- RyokoAI/ShareGPT52K
- RyokoAI/Fandom23K
- milashkaarshif/MoeGirlPedia_wikitext_raw_archive
- wikipedia
- wiki_lingua
- fnlp/moss-003-sft-data
- garage-bAInd/Open-Platypus
- LDJnr/Puffin
- openbmb/llava_zh
- BAAI/COIG
- TigerResearch/tigerbot-zhihu-zh-10k
- liwu/MNBVC
- teknium/openhermes
language:
- en
- zh
pipeline_tag: text-generation
tags:
- llama
- llama2
- qwen
---
![](https://huggingface.co/JosephusCheung/tmp/resolve/main/14.17b.png)

*Image drawn by GPT-4 DALL·E 3*

TL;DR: Perhaps better than all existing models under 70B in most quantitative evaluations.

**Known issue: llama.cpp currently has tokenizer problems with this model; a fix is planned soon.**

# Read Me:

Also see [7B Version](https://huggingface.co/CausalLM/7B)

This model was trained from the model weights of Qwen and LLaMA2. The training used a model structure identical to LLaMA2, with the same multi-head attention (MHA) computation as the original LLaMA2 models, and no additional scaling was applied to the Rotary Position Embedding (RoPE).
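For reference, here is a minimal loading sketch with 🤗 Transformers; the repo id `CausalLM/14B` and the precision/device settings are assumptions on my part, not specified in this card:

```python
# Minimal loading sketch, assuming the checkpoint is served as a standard
# LLaMA2-architecture causal LM under the (assumed) repo id "CausalLM/14B".
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CausalLM/14B")
model = AutoModelForCausalLM.from_pretrained(
    "CausalLM/14B",
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # requires `accelerate`; shards layers across devices
)
```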

We manually curated an SFT dataset of 1.3B tokens for training, built from open-source datasets on Hugging Face. For most of these sentences, we performed manual or synthetic rewrites and generated alternate-language versions using larger language models. Additionally, we conducted augmented text training using carefully selected entries from Wikipedia, featured entries from Fandom, and filtered entries from Moegirlpedia. To strike a balance between efficiency and quality, 100% of the data used for training was synthetic; no text taken directly from the internet, and no original text from publicly available datasets, was used for fine-tuning.

The 7B version of this model is distilled from the 14B model and is designed specifically for speculative sampling. Exercise caution when using it directly, as it may produce hallucinations or unreliable outputs.
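Since the 7B model targets speculative sampling, one way to use it is as a draft model via Transformers' assisted generation. This is a hedged sketch; the repo ids and the assumption that the two checkpoints share a compatible tokenizer are mine, not stated in this card:

```python
# Sketch of speculative (assisted) decoding: the 7B draft model proposes
# tokens and the 14B target model verifies them. Repo ids are assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CausalLM/14B")
target = AutoModelForCausalLM.from_pretrained(
    "CausalLM/14B", torch_dtype="auto", device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "CausalLM/7B", torch_dtype="auto", device_map="auto"
)

inputs = tokenizer("Hello!", return_tensors="pt").to(target.device)
output = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```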

Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language that we were unable to remove. You will therefore still need to perform your own safety checks on the model and filter keywords in its output. Due to computational resource constraints, we are presently unable to implement RLHF for the model's ethics and safety, nor to train on SFT samples that refuse to answer certain questions for restrictive fine-tuning.
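Output filtering is left entirely to the deployer. As a trivial illustration of such a filter (the blocklist terms below are hypothetical placeholders, not a vetted list):

```python
# Illustrative post-generation keyword filter. The blocklist is a
# hypothetical placeholder; each deployment must define its own policy.
BLOCKLIST = {"placeholder_term_a", "placeholder_term_b"}

def is_safe(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)
```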

Bonus: the model was also lightly fine-tuned on the prompt format introduced in LLaVA-1.5, which is unrelated to image attention computation. Therefore, aligning a ViT projection module with the frozen LM on visual-instruction data should enable rapid implementation of effective multimodal capabilities.
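As a conceptual sketch of that alignment stage: LLaVA-1.5 trains a small MLP projector from ViT features into the LM embedding space while the LM stays frozen. The dimensions below are illustrative assumptions, not values from this card:

```python
# Conceptual LLaVA-1.5-style projector: maps ViT patch features into the
# LM embedding space; only this module would be trained, the LM is frozen.
import torch
import torch.nn as nn

class VisionProjector(nn.Module):
    def __init__(self, vit_dim: int = 1024, lm_dim: int = 5120):  # assumed dims
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vit_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def forward(self, vit_features: torch.Tensor) -> torch.Tensor:
        # (batch, num_patches, vit_dim) -> (batch, num_patches, lm_dim)
        return self.proj(vit_features)
```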

## PROMPT FORMAT:
[chatml](https://github.com/openai/openai-python/blob/main/chatml.md)

**System Prompt must not be empty!**
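For example, a single-turn ChatML prompt with a non-empty system message looks like this (the system text itself is an arbitrary example):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
```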

## MMLU:

| Category | ACC |
|----------|-----|
| STEM | 64.19 |
| Humanities | 61.40 |
| Other | 71.64 |
| Social | 75.37 |
| **Average** | **67.36** |

(Outperforms all models under 70B; very close to the best 70B fine-tunes.)


## CEval (Val):

| Category | ACC |
|----------|-----|
| STEM | 66.71 |
| Social Science | 85.10 |
| Humanities | 76.68 |
| Other | 70.23 |
| Hard | 54.71 |
| **Average** | **73.10** |

(Outperforms Qwen-14B and GPT-4.)

## GSM8K

**Zero-shot ACC: 0.7013** (Outperforms MetaMath-13B and Qwen-14B.)

