Prompt format

#3 — opened by AIGUYCONTENT

Hi, interested in using this model to help me secure my home server.

The model card says to use the Llama-3.1 prompt format. Any idea where I need to add this info when using Oobabooga?

There's the "Chat Format" section:

e.g.:

{%- set ns = namespace(found=false) -%}
{%- for message in messages -%}
    {%- if message['role'] == 'system' -%}
        {%- set ns.found = true -%}
    {%- endif -%}
{%- endfor -%}
{%- for message in messages -%}
    {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n' + message['content'].rstrip() + '<|eot_id|>' -}}
{%- endfor -%}
{%- if add_generation_prompt -%}
    {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
{%- endif -%}

and the "Instruction Template" section:

{%- set ns = namespace(found=false) -%}
{%- for message in messages -%}
    {%- if message['role'] == 'system' -%}
        {%- set ns.found = true -%}
    {%- endif -%}
{%- endfor -%}
{%- for message in messages -%}
    {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n' + message['content'].rstrip() + '<|eot_id|>' -}}
{%- endfor -%}
{%- if add_generation_prompt -%}
    {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' -}}
{%- endif -%}
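For what it's worth, you can check what that template actually produces by rendering it yourself with `jinja2` (the same engine Oobabooga uses). A minimal sketch, with illustrative messages and the unused `ns` bookkeeping loop omitted since it doesn't affect the output:

```python
# Render the Llama-3.1 template above with jinja2 to see the final prompt string.
# The messages below are hypothetical examples, not from the model card.
from jinja2 import Template

template_str = (
    "{%- for message in messages -%}"
    "{{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n'"
    " + message['content'].rstrip() + '<|eot_id|>' -}}"
    "{%- endfor -%}"
    "{%- if add_generation_prompt -%}"
    "{{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}"
    "{%- endif -%}"
)

messages = [
    {"role": "system", "content": "You are a security assistant."},
    {"role": "user", "content": "Audit my sshd_config."},
]

prompt = Template(template_str).render(messages=messages, add_generation_prompt=True)
print(prompt)
```

Each turn becomes `<|start_header_id|>role<|end_header_id|>\n\n...content...<|eot_id|>`, and the trailing assistant header is what cues the model to start generating.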

And is this the code I would enter into it?

def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

And the rest of the code beneath it?

WhiteRabbitNeo org

Hey there! I recommend using the new model, as it scores higher in evals. The new model uses ChatML format: https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B
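If it helps, ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` markers instead of the Llama-3.1 headers. A minimal sketch of how a ChatML prompt is assembled (illustrative helper and messages, not code from the model card):

```python
# Build a ChatML-formatted prompt string from a list of chat messages.
# This is a format illustration only; hypothetical function name.
def to_chatml(messages, add_generation_prompt=True):
    prompt = ""
    for m in messages:
        # Each turn: <|im_start|>role\ncontent<|im_end|>\n
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        prompt += "<|im_start|>assistant\n"
    return prompt

example = to_chatml([
    {"role": "system", "content": "You are a security assistant."},
    {"role": "user", "content": "Audit my sshd_config."},
])
print(example)
```

In practice you can also let `tokenizer.apply_chat_template(...)` from `transformers` do this for you, since the template ships with the model.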

migtissera changed discussion status to closed

Hi, is the new model a 7B? Wouldn't this one be better since it's a 70B?

WhiteRabbitNeo org

Yeah, the open source V2.5 model right now is a 7B. We have the V2.5-32B running on the web app.

The 70B is based on Llama-3.1, whereas the 7B is based on Qwen-2.5-Coder. We're finding that the 7B is scoring higher in evals (HumanEval for example).

Up to you!
