
Qwen1.5-110B-Chat-GGUF

Original Model

Qwen/Qwen1.5-110B-Chat

Run with LlamaEdge

  • LlamaEdge version: v0.10.0 and above

  • Prompt template

    • Prompt type: chatml

    • Prompt string

      <|im_start|>system
      {system_message}<|im_end|>
      <|im_start|>user
      {prompt}<|im_end|>
      <|im_start|>assistant
      
  • Context size: 32000

  • Run as LlamaEdge service (an example request follows this list)

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen1.5-110B-Chat-Q2_K.gguf \
      llama-api-server.wasm \
      --prompt-template chatml \
      --ctx-size 32000 \
      --model-name qwen1.5-110b-chat
    
  • Run as LlamaEdge command app

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen1.5-110B-Chat-Q2_K.gguf \
      llama-chat.wasm \
      --prompt-template chatml \
      --ctx-size 32000
    

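Once the API server is running, it exposes an OpenAI-compatible /v1/chat/completions endpoint; the system and user messages in the request are rendered into the chatml prompt shown above. A minimal request sketch, assuming the server listens on the default localhost:8080:

    curl -X POST http://localhost:8080/v1/chat/completions \
      -H 'Content-Type: application/json' \
      -d '{
        "model": "qwen1.5-110b-chat",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is the capital of France?"}
        ]
      }'
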
Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| Qwen1.5-110B-Chat-Q2_K.gguf | Q2_K | 2 | 41.2 GB | smallest, significant quality loss - not recommended for most purposes |
| Qwen1.5-110B-Chat-Q3_K_L-00001-of-00002.gguf | Q3_K_L | 3 | 32.2 GB | small, substantial quality loss |
| Qwen1.5-110B-Chat-Q3_K_L-00002-of-00002.gguf | Q3_K_L | 3 | 26 GB | small, substantial quality loss |
| Qwen1.5-110B-Chat-Q3_K_M-00001-of-00002.gguf | Q3_K_M | 3 | 32.2 GB | very small, high quality loss |
| Qwen1.5-110B-Chat-Q3_K_M-00002-of-00002.gguf | Q3_K_M | 3 | 21.5 GB | very small, high quality loss |
| Qwen1.5-110B-Chat-Q3_K_S-00001-of-00002.gguf | Q3_K_S | 3 | 32.2 GB | very small, high quality loss |
| Qwen1.5-110B-Chat-Q3_K_S-00002-of-00002.gguf | Q3_K_S | 3 | 16.3 GB | very small, high quality loss |
| Qwen1.5-110B-Chat-Q4_0-00001-of-00002.gguf | Q4_0 | 4 | 32.1 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Qwen1.5-110B-Chat-Q4_0-00002-of-00002.gguf | Q4_0 | 4 | 30.8 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Qwen1.5-110B-Chat-Q4_K_M-00001-of-00003.gguf | Q4_K_M | 4 | 32 GB | medium, balanced quality - recommended |
| Qwen1.5-110B-Chat-Q4_K_M-00002-of-00003.gguf | Q4_K_M | 4 | 32.1 GB | medium, balanced quality - recommended |
| Qwen1.5-110B-Chat-Q4_K_M-00003-of-00003.gguf | Q4_K_M | 4 | 3.09 GB | medium, balanced quality - recommended |
| Qwen1.5-110B-Chat-Q4_K_S-00001-of-00002.gguf | Q4_K_S | 4 | 32.1 GB | small, greater quality loss |
| Qwen1.5-110B-Chat-Q4_K_S-00002-of-00002.gguf | Q4_K_S | 4 | 31.3 GB | small, greater quality loss |
| Qwen1.5-110B-Chat-Q5_0-00001-of-00003.gguf | Q5_0 | 5 | 32 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Qwen1.5-110B-Chat-Q5_0-00002-of-00003.gguf | Q5_0 | 5 | 32.1 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Qwen1.5-110B-Chat-Q5_0-00003-of-00003.gguf | Q5_0 | 5 | 12.5 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Qwen1.5-110B-Chat-Q5_K_M-00001-of-00003.gguf | Q5_K_M | 5 | 32.1 GB | large, very low quality loss - recommended |
| Qwen1.5-110B-Chat-Q5_K_M-00002-of-00003.gguf | Q5_K_M | 5 | 32 GB | large, very low quality loss - recommended |
| Qwen1.5-110B-Chat-Q5_K_M-00003-of-00003.gguf | Q5_K_M | 5 | 14.8 GB | large, very low quality loss - recommended |
| Qwen1.5-110B-Chat-Q5_K_S-00001-of-00003.gguf | Q5_K_S | 5 | 32 GB | large, low quality loss - recommended |
| Qwen1.5-110B-Chat-Q5_K_S-00002-of-00003.gguf | Q5_K_S | 5 | 32.1 GB | large, low quality loss - recommended |
| Qwen1.5-110B-Chat-Q5_K_S-00003-of-00003.gguf | Q5_K_S | 5 | 12.5 GB | large, low quality loss - recommended |
| Qwen1.5-110B-Chat-Q6_K-00001-of-00003.gguf | Q6_K | 6 | 31.9 GB | very large, extremely low quality loss |
| Qwen1.5-110B-Chat-Q6_K-00002-of-00003.gguf | Q6_K | 6 | 32 GB | very large, extremely low quality loss |
| Qwen1.5-110B-Chat-Q6_K-00003-of-00003.gguf | Q6_K | 6 | 27.3 GB | very large, extremely low quality loss |
| Qwen1.5-110B-Chat-Q8_0-00001-of-00004.gguf | Q8_0 | 8 | 32.1 GB | very large, extremely low quality loss - not recommended |
| Qwen1.5-110B-Chat-Q8_0-00002-of-00004.gguf | Q8_0 | 8 | 31.9 GB | very large, extremely low quality loss - not recommended |
| Qwen1.5-110B-Chat-Q8_0-00003-of-00004.gguf | Q8_0 | 8 | 32.2 GB | very large, extremely low quality loss - not recommended |
| Qwen1.5-110B-Chat-Q8_0-00004-of-00004.gguf | Q8_0 | 8 | 22 GB | very large, extremely low quality loss - not recommended |
| Qwen1.5-110B-Chat-f16-00001-of-00007.gguf | f16 | 16 | 31.9 GB | |
| Qwen1.5-110B-Chat-f16-00002-of-00007.gguf | f16 | 16 | 32 GB | |
| Qwen1.5-110B-Chat-f16-00003-of-00007.gguf | f16 | 16 | 31.8 GB | |
| Qwen1.5-110B-Chat-f16-00004-of-00007.gguf | f16 | 16 | 31.8 GB | |
| Qwen1.5-110B-Chat-f16-00005-of-00007.gguf | f16 | 16 | 31.8 GB | |
| Qwen1.5-110B-Chat-f16-00006-of-00007.gguf | f16 | 16 | 31.5 GB | |
| Qwen1.5-110B-Chat-f16-00007-of-00007.gguf | f16 | 16 | 31.6 GB | |

Quantized with llama.cpp b2824.
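
Most of the quantizations above are split into multiple parts, and all parts must sit in the same directory before loading; llama.cpp-based runtimes (including the GGML backend used by LlamaEdge) locate the remaining shards when given the first part. A download sketch using huggingface-cli, assuming this repo is published under the id second-state/Qwen1.5-110B-Chat-GGUF:

    # Fetch all shards of the Q5_K_M quantization (the repo id is an assumption)
    huggingface-cli download second-state/Qwen1.5-110B-Chat-GGUF \
      --include 'Qwen1.5-110B-Chat-Q5_K_M-*.gguf' \
      --local-dir .

Then pass the first shard (Qwen1.5-110B-Chat-Q5_K_M-00001-of-00003.gguf) to --nn-preload. Merging the shards with llama.cpp's gguf-split --merge tool is optional and only needed for tools that cannot read split files directly.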
