Why is the prompt format different from the original model?

#3
by sparsh35 - opened

As shown on the original model page, https://huggingface.co/Nexusflow/Starling-LM-7B-beta, the prompt format is:
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"

SolidRusT Networks org

The prompt format should be read from the tokenizer's chat template: https://huggingface.co/Nexusflow/Starling-LM-7B-beta/blob/main/tokenizer_config.json#L51

Also, it seems this model does not use a system prompt.
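
For reference, the template can be rendered programmatically instead of hand-building the string. A minimal sketch using transformers' apply_chat_template (assuming the Nexusflow/Starling-LM-7B-beta tokenizer is reachable):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Nexusflow/Starling-LM-7B-beta")

messages = [{"role": "user", "content": "Hello, how are you?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # appends the trailing "GPT4 Correct Assistant:"
)
print(prompt)
# Should print something like:
# GPT4 Correct User: Hello, how are you?<|end_of_turn|>GPT4 Correct Assistant: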

SolidRusT Networks org

I think you are 100% correct; I'm also seeing it here:

# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)

# Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""  # the assistant's reply to the first turn goes here
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)

# Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
SolidRusT Networks org

I think I need to refactor the model card to use the "Multi-turn conversation" template, so that it can be compared in the same way as the other 7B AWQ models.
