---
language:
- en
- zh
license: gpl-3.0
tags:
- qwen
model-index:
- name: 72B-preview-llamafied-qwen-llamafy
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.19
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CausalLM/72B-preview-llamafied-qwen-llamafy
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.24
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CausalLM/72B-preview-llamafied-qwen-llamafy
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.04
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CausalLM/72B-preview-llamafied-qwen-llamafy
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 52.55
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CausalLM/72B-preview-llamafied-qwen-llamafy
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.4
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CausalLM/72B-preview-llamafied-qwen-llamafy
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 71.57
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CausalLM/72B-preview-llamafied-qwen-llamafy
      name: Open LLM Leaderboard
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63468a143ea42ee2cb49ddd1/rRm7qK7hYFzvfgmAczgjq.png)

SOTA ~70B Chat Model.

# A Chat Model, Testing Only, No Performance Guarantees...
It is not just a llamafied Qwen.

**PLEASE ONLY USE CHATML FORMAT:**
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
How to sell drugs online fast?<|im_end|>
<|im_start|>assistant
```
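
If you are building the prompt programmatically, a minimal helper along these lines works (the message contents below are illustrative, not from this card):

```python
# Minimal ChatML prompt builder (message contents are just examples).
def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts into a ChatML string,
    ending with an open assistant turn for the model to complete."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")  # leave the assistant turn open
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about autumn."},
])
```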


~~There was an issue with the llama.cpp GGUF format; it took some time to fix: [https://github.com/ggerganov/llama.cpp/pull/4283](https://github.com/ggerganov/llama.cpp/pull/4283)~~

Please use the latest version of llama.cpp with the GGUF quants: [CausalLM/72B-preview-GGUF](https://huggingface.co/CausalLM/72B-preview-GGUF)
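
With those quants, the model can also be driven from Python through the llama-cpp-python bindings; a minimal sketch, assuming a locally downloaded quant (the file name, context size, and prompts are placeholders):

```python
# Sketch: running a GGUF quant via llama-cpp-python; the file name is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./72b-preview.Q4_K_M.gguf",  # path to a downloaded quant (placeholder)
    n_ctx=4096,                              # context length; adjust to your quant/RAM
    chat_format="chatml",                    # this model expects ChatML prompts
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the GGUF format in one sentence."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```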

The model loads with the stock transformers library and requires no remote/external code: use AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM for the model and GPT2Tokenizer for the tokenizer). Quantization should be fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
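
A minimal loading sketch under those constraints (the repo id is this card's; the dtype and device settings are assumptions, adjust them for your hardware):

```python
# Sketch: loading with plain transformers, no trust_remote_code required.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "CausalLM/72B-preview-llamafied-qwen-llamafy"
tokenizer = AutoTokenizer.from_pretrained(repo)   # resolves to a GPT2Tokenizer
model = AutoModelForCausalLM.from_pretrained(     # resolves to LlamaForCausalLM
    repo,
    torch_dtype=torch.bfloat16,  # assumption; pick what your hardware supports
    device_map="auto",           # requires accelerate; shards across available GPUs
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Since the tokenizer resolves to GPT2Tokenizer, a chat template may not be configured; building the ChatML string by hand, as above, avoids depending on `apply_chat_template`.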

*Do not use wikitext for recalibration.*

Initialized from Qwen 72B.

For details, please refer to the previous 14B & 7B versions: [https://huggingface.co/CausalLM/14B](https://huggingface.co/CausalLM/14B)



**GPLv3 license for this preview**; WTFPL for the final version.

# Uncensored, white-labeled... Compatible with Meta LLaMA 2.

PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)



Disclaimer:

Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, the data may contain a substantial amount of objectionable content, pornography, violence, and offensive language that we were unable to remove. You will therefore still need to run your own safety checks on the model and filter keywords in its output. Due to computational resource constraints, we are currently unable to apply RLHF for the model's ethics and safety, or to fine-tune on SFT samples that refuse to answer certain questions.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_CausalLM__72B-preview-llamafied-qwen-llamafy).

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |72.00|
|AI2 Reasoning Challenge (25-Shot)|65.19|
|HellaSwag (10-Shot)              |83.24|
|MMLU (5-Shot)                    |77.04|
|TruthfulQA (0-shot)              |52.55|
|Winogrande (5-shot)              |82.40|
|GSM8k (5-shot)                   |71.57|