
ibm-granite/granite-3b-code-instruct-2k-GGUF

This is the Q4_K_M quantized GGUF version of ibm-granite/granite-3b-code-instruct-2k. Refer to the original model card for more details.

Use with llama.cpp

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# build
make

# run generation
# (note: newer llama.cpp builds rename this binary to llama-cli)
./main -m granite-3b-code-instruct-2k-GGUF/granite-3b-code-instruct.Q4_K_M.gguf -n 128 -p "def generate_random(x: int):" --color
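As an alternative to the llama.cpp CLI, the same GGUF file can be loaded from Python with the llama-cpp-python bindings. A minimal sketch, assuming the file sits at the path used in the commands above (adjust the path to wherever you downloaded it):

```python
from pathlib import Path

# Path assumed from the repo layout above; adjust if you stored the file elsewhere.
MODEL_PATH = Path("granite-3b-code-instruct-2k-GGUF/granite-3b-code-instruct.Q4_K_M.gguf")

if MODEL_PATH.exists():
    from llama_cpp import Llama  # pip install llama-cpp-python

    # n_ctx=2048 matches this model's 2k context window.
    llm = Llama(model_path=str(MODEL_PATH), n_ctx=2048)
    out = llm("def generate_random(x: int):", max_tokens=128)
    print(out["choices"][0]["text"])
else:
    print(f"model file not found: {MODEL_PATH}")
```

The guard keeps the script from crashing when the weights have not been downloaded yet; in practice you would drop it and point `model_path` directly at your local copy.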
Model details

Model size: 3.48B params
Architecture: llama
Quantization: 4-bit (Q4_K_M)
Inference API (serverless) has been turned off for this model.
