ibm-granite/granite-20b-code-instruct-GGUF

This is the Q4_K_M quantized GGUF version of the original ibm-granite/granite-20b-code-instruct model. Refer to the original model card for more details.

Use with llama.cpp

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# build
make

# run generation
./main -m granite-20b-code-instruct-GGUF/granite-20b-code-instruct.Q4_K_M.gguf -n 128 -p "def generate_random(x: int):" --color
Model details

Format: GGUF
Model size: 20.1B params
Architecture: starcoder
