---
license: gpl-3.0
language:
- en
- zh
tags:
- qwen
---
A chat model, for testing only; no performance guarantees...
There is something wrong with the llama.cpp GGUF format; it will take some time to fix. See https://github.com/ggerganov/llama.cpp/pull/4283
Use the transformers library to load the model; no remote/external code is required. Use AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM for the model and GPT2Tokenizer for the tokenizer). Model quantization should be fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
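A minimal loading sketch, assuming a hypothetical repo id (substitute the actual repository name):

```python
# A minimal sketch of loading the model with the transformers library.
# "CausalLM/72B" is a hypothetical repo id used for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/72B"  # hypothetical; substitute the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # adjust dtype/device placement as needed
    device_map="auto",
)

# Equivalently, the concrete classes can be specified manually:
# from transformers import LlamaForCausalLM, GPT2Tokenizer
# tokenizer = GPT2Tokenizer.from_pretrained(model_id)
# model = LlamaForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
```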
However, a GGUF quantized model is not currently possible for the Qwen-72B series; see https://github.com/ggerganov/llama.cpp/pull/4281
Do not use wikitext for recalibration.
Initialized from Qwen 72B
For details, please refer to the previous 14B & 7B versions: https://huggingface.co/CausalLM/14B
This preview is released under the GPL-3.0 license; the final version will be released under WTFPL.
Uncensored, white-labeled... Compatible with Meta LLaMA 2.
PROMPT FORMAT: ChatML
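For reference, a minimal sketch of assembling a ChatML prompt; the system message below is illustrative, not mandated by this card:

```python
# Build a single-turn ChatML prompt using the <|im_start|>/<|im_end|> markers.
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt string, leaving the assistant turn open."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Hello!")
```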
Disclaimer:
Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language that we were unable to remove. You will therefore still need to perform your own safety checks on the model and filter keywords in its output. Due to computational resource constraints, we are presently unable to implement RLHF for the model's ethics and safety, nor to train on SFT samples that refuse to answer certain questions for restrictive fine-tuning.
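As a minimal illustration of the keyword filtering suggested above; the blocklist entries are placeholders you would replace with your own:

```python
# A sketch of post-hoc keyword filtering on generated text, as the
# disclaimer recommends; the blocklist entries below are hypothetical.
BLOCKLIST = {"placeholder_term_1", "placeholder_term_2"}

def filter_output(text: str) -> str:
    """Redact any blocklisted keyword found in the model's output."""
    for word in BLOCKLIST:
        text = text.replace(word, "[filtered]")
    return text
```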