How to set up model parameters and prompt format in HF Chat-UI?

#3
by PaulForsterNewman - opened

Hi there,

first of all, I would like to thank the team for providing a German LLM! I am almost sure that this is a "noob question", but since I am absolutely new to the subject, I'll ask it anyway. I successfully followed this guide to deploy Llama-3-SauerkrautLM-8b-Instruct on an EC2 instance, using Docker, Hugging Face Chat-UI, and a MongoDB database:

https://billtcheng2013.medium.com/huggingface-chat-ui-your-own-chatgpt-part-1-a5eee6b6614c
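
In short, my setup is a MongoDB container for Chat-UI's conversation history plus the Chat-UI container itself, roughly like this (from memory; the exact image names, ports, and flags from the guide may differ):

```bash
# MongoDB backing store for Chat-UI's chat history (default port assumed)
docker run -d -p 27017:27017 --name mongo-chatui mongo:latest

# Chat-UI itself; it picks up MODELS, MONGODB_URL, etc. from .env.local
# (MONGODB_URL in .env.local has to point at the Mongo container above)
docker run -d -p 3000:3000 --env-file .env.local --name chat-ui ghcr.io/huggingface/chat-ui:latest
```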

However, when I query Llama-3-SauerkrautLM-8b-Instruct, it just sends greetings and asks how it can assist (in German), again and again, irrespective of what I ask it.

I think it has to do with the Chat-UI config: I had to customize the .env.local file with the model specs.

[screenshot of my .env.local model config]

Is there a way to find information on how to change the config (model parameters and prompt format) for Llama-3-SauerkrautLM-8b-Instruct? I changed the model name and replaced the "preprompt" parameter with the prompt template from the model card, but that seems to be wrong.

Any help would be appreciated!

[INST][/INST] does not work well with this model. You should use the same settings as for all Llama 3 models, like this (for SillyTavern-Instruct):

"input_sequence": "<|start_header_id|>user({{user}})<|end_header_id|>\n\n",
"output_sequence": "<|start_header_id|>assistant({{char}})<|end_header_id|>\n\n",
"first_output_sequence": "",
"last_output_sequence": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "<|eot_id|>",
DavidGF changed discussion status to closed
