Supported by llama-cpp-python?

by madhucharan

I want to use llama.cpp for the StarCoder model, but it throws an error for StarCoder2. Is it not supported by llama-cpp-python?

@madhucharan You need to check whether the version of llama-cpp-python you are using was built from a commit after they added StarCoder2 support.
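For example (a minimal sketch; I'm not certain of the exact minimum version, so treat the check as a starting point and consult the llama.cpp changelog for the commit that added StarCoder2 support):

    # Upgrade first if needed: pip install --upgrade llama-cpp-python
    import llama_cpp

    # Compare this against the first release that bundled a llama.cpp build
    # with StarCoder2 support.
    print(llama_cpp.__version__)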

Hi, yes, I updated the version and it loaded the StarChat2 model. Now my main confusion is the prompt format for instructions. This is what I currently use for Llama and Mistral:

    # Build a Llama-2-style prompt: optional <<SYS>> system block, then
    # alternating [INST] ... [/INST] turns separated by </s><s>.
    if use_system_prompt:
        input_prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n "
    else:
        input_prompt = f"[INST] "

    # Replay the conversation history as (user, assistant) pairs.
    for interaction in history:
        input_prompt = input_prompt + str(interaction[0]) + " [/INST] " + str(interaction[1]) + " </s><s> [INST] "

    # Append the new user message and leave the prompt open for the model's reply.
    input_prompt = input_prompt + str(message) + " [/INST] "

    # Stream a completion; sampling parameters come from the Env config.
    output = llm(
        input_prompt,
        temperature=Env.TEMPERATURE,
        top_p=Env.TOP_P,
        top_k=Env.TOP_K,
        repeat_penalty=Env.REPEAT_PENALTY,
        max_tokens=max_tokens_input,
        stop=[
            "<|prompter|>",
            "<|endoftext|>",
            "<|endoftext|> \n",
            "ASSISTANT:",
            "USER:",
            "SYSTEM:",
        ],
        stream=True,
    )
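With stream=True, llama-cpp-python returns an iterator of completion chunks rather than a single result. A minimal sketch of consuming it, assuming the OpenAI-style chunk shape the library uses:

    # Each streamed chunk carries the newly generated text fragment.
    for chunk in output:
        print(chunk["choices"][0]["text"], end="", flush=True)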

Where can I access the template for the StarChat model? (I use this model specifically: https://huggingface.co/bartowski/starchat2-15b-v0.1-GGUF)

@madhucharan This model has no template; it is only a base model. You should use dranger003/dolphincoder-starcoder2-15b-iMat.GGUF for prompting.

@dranger003 When you say this, do you mean this repo or the link I shared (https://huggingface.co/bartowski/starchat2-15b-v0.1-GGUF)? I'm confused, as these are two different ones.

Ah, I see, sorry, I missed that last bit. I think you may need to change your template; have a look here.
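One way to check for a template is to inspect the GGUF metadata, which llama-cpp-python exposes on the loaded model. A minimal sketch, assuming the file embeds a template under the conventional key (the model path here is illustrative):

    from llama_cpp import Llama

    # Substitute the GGUF file you actually downloaded.
    llm = Llama(model_path="starchat2-15b-v0.1-Q4_K_M.gguf")

    # GGUF files conventionally store a Jinja chat template under this key,
    # but not every quantized file includes one.
    print(llm.metadata.get("tokenizer.chat_template", "<no embedded chat template>"))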

Do you think it should work? Sorry, I'm new to this, hence a little bit of confusion in understanding the prompt format.

This is the prompt format I got from the first GGUF link (https://huggingface.co/dranger003/dolphincoder-starcoder2-15b-iMat.GGUF) you shared, and I used it as a reference:

Format reference:

    <|im_start|>system
    You are DolphinCoder, a helpful AI programming assistant.<|im_end|>
    <|im_start|>user
    {prompt}<|im_end|>
    <|im_start|>assistant

    # Build a ChatML-style prompt matching the reference format above.
    if use_system_prompt:
        # System turn; end by opening the next <|im_start|> so "user" attaches below.
        input_prompt = f"<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>"
    else:
        input_prompt = f"<|im_start|>"

    # User turn, then open the assistant turn for the model to complete.
    input_prompt = f"{input_prompt}user\n{str(message)}<|im_end|>\n<|im_start|>assistant "

    output = llm(
        input_prompt,
        temperature=Env.TEMPERATURE,
        top_p=Env.TOP_P,
        top_k=Env.TOP_K,
        repeat_penalty=Env.REPEAT_PENALTY,
        max_tokens=max_tokens_input,
        stop=[
            "<|im_end|>",
        ],
        stream=True,
    )
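As an aside, rather than assembling the ChatML string by hand, llama-cpp-python can apply the format itself via its chat-completion API. A minimal sketch, assuming the built-in chatml chat format and an illustrative model path:

    from llama_cpp import Llama

    # chat_format="chatml" makes the library build the <|im_start|>/<|im_end|>
    # prompt internally from the messages list.
    llm = Llama(model_path="dolphincoder-starcoder2-15b.gguf", chat_format="chatml")

    response = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are DolphinCoder, a helpful AI programming assistant."},
            {"role": "user", "content": "Write a Python function that reverses a string."},
        ],
    )
    print(response["choices"][0]["message"]["content"])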

I think you'll have to try it; it looks fine, but I don't use llama-cpp-python. Also, at the end after assistant, use a newline character instead of a space.
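That is, the final line of the prompt-building code would become:

    input_prompt = f"{input_prompt}user\n{str(message)}<|im_end|>\n<|im_start|>assistant\n"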
