
Refact-1.6B-fim-GGUF

Description

This repository contains quantized model files for Refact-1.6B in the GGUF format.

Prompt template: fill-in-the-middle (FIM)

```
<fim_prefix>def print_hello_world():\n    """<fim_suffix>\n    print("Hello world!")<fim_middle>
```
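For programmatic use, the FIM prompt can be assembled from its three special tokens. A minimal sketch, assuming you build the prompt string yourself before passing it to an inference backend (the `build_fim_prompt` helper below is illustrative, not part of the model card):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt from Refact's FIM tokens.

    The model is asked to generate the code that belongs between
    `prefix` and `suffix`; it emits that text after <fim_middle>.
    """
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"


# Reproduces the template above: complete a docstring inside a function.
prompt = build_fim_prompt(
    'def print_hello_world():\n    """',
    '\n    print("Hello world!")',
)
```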

Prompt template: chat (experimental)

```
<empty_output>SYSTEM You are a programming assistant
<empty_output>USER How do I sort a list in Python?
<empty_output>ASSISTANT
```
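The chat format prefixes each turn with the `<empty_output>` token and a role keyword, and ends with the `ASSISTANT` role so the model generates the reply. A hedged sketch of assembling such a prompt (the `build_chat_prompt` helper is an illustration, not an official API):

```python
def build_chat_prompt(system: str, user: str) -> str:
    """Assemble Refact's experimental chat prompt.

    Each turn starts with the <empty_output> token and a role keyword;
    the prompt ends at ASSISTANT so generation continues as the answer.
    """
    return (
        f"<empty_output>SYSTEM {system}\n"
        f"<empty_output>USER {user}\n"
        f"<empty_output>ASSISTANT"
    )


chat_prompt = build_chat_prompt(
    "You are a programming assistant",
    "How do I sort a list in Python?",
)
```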

Example llama.cpp command

```shell
./main -m refact-1_6b-Q4_K_M.gguf -c 4096 -n -1 -p '<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>'
```

For other parameters and how to use them, please refer to the llama.cpp documentation.

Model details

Model size: 1.59B params
Architecture: refact
