
# ibm-granite/granite-8b-code-instruct-4k-GGUF

This is the Q4_K_M quantized GGUF version of the original ibm-granite/granite-8b-code-instruct-4k. Refer to the original model card for more details.

## Use with llama.cpp

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# build
make

# run generation
./main -m granite-8b-code-instruct-4k-GGUF/granite-8b-code-instruct.Q4_K_M.gguf -n 128 -p "def generate_random(x: int):" --color
```
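The commands above assume the `.gguf` file is already on disk. As a minimal sketch (assuming the `huggingface_hub` package is installed; the repository and file names are taken from this card and the command above), the quantized file can be fetched programmatically:

```python
from huggingface_hub import hf_hub_download

# Repository and filename as they appear on this model card.
REPO_ID = "ibm-granite/granite-8b-code-instruct-4k-GGUF"
FILENAME = "granite-8b-code-instruct.Q4_K_M.gguf"

def fetch_model(repo_id: str = REPO_ID, filename: str = FILENAME) -> str:
    """Download the quantized model file (cached locally) and return its path."""
    return hf_hub_download(repo_id=repo_id, filename=filename)

if __name__ == "__main__":
    # Pass the returned path to llama.cpp via `-m`.
    print(fetch_model())
```

The returned path can be supplied directly to the `-m` flag of the llama.cpp command shown above.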
## Model details

- Format: GGUF
- Model size: 8.05B params
- Architecture: llama
- Quantization: 4-bit (Q4_K_M)
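The GGUF container packs the quantized weights together with model metadata. As an illustrative sketch (not part of this card), every GGUF file begins with a fixed header: the magic bytes `GGUF`, a version number, a tensor count, and a metadata key/value count, which can be read to sanity-check a download:

```python
import struct

def read_gguf_header(path: str) -> dict:
    """Read the fixed GGUF header: magic, version, tensor count, metadata KV count."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError("not a GGUF file")
        # version: uint32 LE; tensor_count and metadata_kv_count: uint64 LE
        version, tensor_count, kv_count = struct.unpack("<IQQ", f.read(20))
    return {"version": version, "tensors": tensor_count, "metadata_kvs": kv_count}
```

Running this on the downloaded `.gguf` file confirms it is a well-formed GGUF container before handing it to llama.cpp.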
