
MaziyarPanahi/WizardLM-2-8x22B-GGUF

Description

MaziyarPanahi/WizardLM-2-8x22B-GGUF contains GGUF format model files for microsoft/WizardLM-2-8x22B.

How to download

You can download only the quants you need instead of cloning the entire repository as follows:

huggingface-cli download MaziyarPanahi/WizardLM-2-8x22B-GGUF --local-dir . --include '*Q2_K*gguf'

On Windows (cmd does not treat single quotes as quoting characters), use double quotes around the pattern:

huggingface-cli download MaziyarPanahi/WizardLM-2-8x22B-GGUF --local-dir . --include "*Q4_K_S*gguf"
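The --include flag is a shell-style glob matched against each filename in the repository. As a sketch of which shards a pattern like '*Q2_K*gguf' selects, here is the same matching done with Python's fnmatch (the filenames below follow this repo's naming scheme but are illustrative, not an exact listing):

```python
import fnmatch

# Illustrative shard filenames in the "<name>.<quant>-XXXXX-of-YYYYY.gguf"
# scheme; the actual repo contents may differ.
files = [
    "WizardLM-2-8x22B.Q2_K-00001-of-00005.gguf",
    "WizardLM-2-8x22B.Q2_K-00002-of-00005.gguf",
    "WizardLM-2-8x22B.Q4_K_S-00001-of-00009.gguf",
    "WizardLM-2-8x22B.IQ1_S-00001-of-00003.gguf",
]

# --include downloads only the files matching the glob.
selected = [f for f in files if fnmatch.fnmatch(f, "*Q2_K*gguf")]
print(selected)
```

Only the two Q2_K shards match; the Q4_K_S and IQ1_S files are skipped.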

Load sharded model

You only need to pass the first shard to llama.cpp: llama_load_model_from_file detects the number of shards from the filename and loads the additional tensors from the remaining files.

llama.cpp/main -m WizardLM-2-8x22B.Q2_K-00001-of-00005.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 1024 -e

Prompt template

{system_prompt}
USER: {prompt}
ASSISTANT: </s>

or

A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, 
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: {prompt} ASSISTANT: </s>
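The multi-turn form above can be assembled programmatically. A minimal sketch, where build_prompt is a hypothetical helper (not part of any library) and the system prompt is the one shown above:

```python
# Hypothetical helper assembling the Vicuna-style template shown above:
# system prompt, then "USER: ... ASSISTANT: ...</s>" for each past turn,
# then the new user turn left open for the model to complete.
def build_prompt(system_prompt: str, turns: list[tuple[str, str]], prompt: str) -> str:
    out = system_prompt.strip() + " "
    for user, assistant in turns:
        out += f"USER: {user} ASSISTANT: {assistant}</s>"
    out += f"USER: {prompt} ASSISTANT: "
    return out

p = build_prompt(
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.",
    [("Hi", "Hello.")],
    "What is GGUF?",
)
print(p)
```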
Model size: 141B params
Architecture: llama