---
license: gpl-3.0
language:
- en
- zh
tags:
- qwen
---
# A Chat Model, Testing only, no performance guarantee...

*There is something wrong with the llama.cpp GGUF format; it will take some time to fix. [https://github.com/ggerganov/llama.cpp/pull/4283](https://github.com/ggerganov/llama.cpp/pull/4283)*

Use the transformers library, which does not require remote/external code to load the model: AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM to load the LM and GPT2Tokenizer to load the tokenizer). Model quantization should be fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
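A minimal loading sketch based on the above. The repo id `"CausalLM/72B"` is a placeholder assumption (only the 14B/7B repos are linked below); substitute the actual repository name. The import is deferred into the function so the sketch stays self-contained.

```python
def load_model(repo_id: str):
    # Plain transformers classes, no trust_remote_code needed.
    # AutoModelForCausalLM resolves to LlamaForCausalLM and
    # AutoTokenizer to GPT2Tokenizer for this architecture.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype="auto",   # pick the checkpoint's native dtype
        device_map="auto",    # shard across available devices
    )
    return model, tokenizer


if __name__ == "__main__":
    # "CausalLM/72B" is a hypothetical repo id, not confirmed by this card.
    model, tokenizer = load_model("CausalLM/72B")
```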

However, a GGUF-quantized model is not currently possible for the Qwen-72B series; see [https://github.com/ggerganov/llama.cpp/pull/4281](https://github.com/ggerganov/llama.cpp/pull/4281)

*Do not use wikitext for recalibration.*

Initialized from Qwen 72B.

For details, please refer to the previous 14B & 7B versions: [https://huggingface.co/CausalLM/14B](https://huggingface.co/CausalLM/14B)



**GPL-3.0 license for this preview**; WTFPL for the final version.

# Uncensored, white-labeled... Compatible with Meta LLaMA 2.

PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
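A short sketch of building a prompt in the ChatML format linked above, assuming the standard `<|im_start|>`/`<|im_end|>` delimiters from OpenAI's chatml.md; the helper name `build_chatml_prompt` is illustrative, not part of any library.

```python
def build_chatml_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts."""
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|>.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open an assistant turn to cue the model to generate a reply.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)


prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

The resulting string can be tokenized and passed to `model.generate`; stop generation on `<|im_end|>`.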



Disclaimer:

Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language present that we are unable to remove. You will therefore still need to perform your own safety checks on the model and filter keywords in its output. Due to computational resource constraints, we are presently unable to implement RLHF for the model's ethics and safety, nor to train on SFT samples that refuse to answer certain questions for restrictive fine-tuning.