
CodeQwen1.5-7B-Chat-GGUF

Model Description

CodeQwen1.5 is the code-specific version of Qwen1.5. It is a transformer-based, decoder-only language model pretrained on a large amount of code data. CodeQwen1.5 is based on Qwen1.5, a language model series that includes decoder language models of different sizes. It is trained on 3 trillion tokens of code data and uses grouped query attention (GQA) for efficient inference.

  • Strong code generation capabilities and competitive performance across a series of benchmarks;
  • Support for long-context understanding and generation with a context length of 64K tokens;
  • Support for 92 coding languages;
  • Excellent performance in text-to-SQL, bug fixing, etc.

For more details, please refer to the Qwen blog post and GitHub repo.
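
As a quick start, the following is a minimal sketch of running one of the GGUF files from this repository locally with llama-cpp-python; the file name, context size, and sampling settings are assumptions, so substitute the quantization you actually downloaded.

```python
# Minimal sketch: chat with a local CodeQwen1.5 GGUF file via llama-cpp-python.
# The model_path below is an assumption -- point it at the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./codeqwen-1_5-7b-chat-q5_k_m.gguf",  # assumed local file name
    n_ctx=8192,        # raise toward 64K if you have the memory for it
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that merges two sorted lists."},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```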

Requirements

The code for Qwen1.5 has been merged into the latest Hugging Face transformers. We advise you to install transformers>=4.37.0, or you might encounter the following error:

KeyError: 'qwen2'.
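
For reference, here is a minimal sketch of loading the original (unquantized) chat model with transformers; the repository id Qwen/CodeQwen1.5-7B-Chat is an assumption, and with an older transformers version the from_pretrained call below is where the KeyError appears.

```python
# Minimal sketch (assumes transformers>=4.37.0 and the Qwen/CodeQwen1.5-7B-Chat repo id).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/CodeQwen1.5-7B-Chat"  # assumed repo id of the unquantized chat model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a quicksort function in Python."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```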

Tips

  • If you encounter code switching or other degraded outputs, we advise you to use the hyperparameters we provide in generation_config.json.
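
If you drive generation yourself, a minimal sketch of reusing those hyperparameters via transformers' GenerationConfig is shown below; the repository id is again an assumption.

```python
# Minimal sketch: load the recommended sampling hyperparameters from generation_config.json.
# The repo id is an assumption; point it at the checkpoint you are actually using.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("Qwen/CodeQwen1.5-7B-Chat")
print(gen_config)  # inspect temperature, top_p, repetition_penalty, etc.
# Later: model.generate(input_ids, generation_config=gen_config)
```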

