
Prompt format contradiction

#5 · opened by 2EyeGuy

You give two examples of the following prompt format:
### Instruction: <question>\n\n### Assistant: <answer>\n\n

But your Hugging Face Space uses this version of the Alpaca prompt format:
### Instruction: \n<question>\n\n### Response:\n<answer>\n\n

And this version of the Vicuna 1.1 prompt format:
USER: <question>\nASSISTANT: <answer>\n

But reading the axolotl source code and looking at your config file shows that you trained two of the models with a different version of the Vicuna 1.1 instruction format, and one of the models with the Alpaca instruction format, which is a weird way of doing it.

Two of the datasets:
USER: <question> ASSISTANT: <answer></s>

The other dataset:
Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n<question>\n\n### Response:\n<answer>
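
For concreteness, here's a minimal Python sketch of those two training templates as inference-time prompt builders. The function names are hypothetical; the template strings are copied from the formats quoted above, so treat this as an illustration rather than an authoritative spec:

```python
def vicuna_prompt(question: str) -> str:
    # Two of the datasets: single spaces between turns; during training,
    # the answer followed by </s> came after "ASSISTANT:".
    return f"USER: {question} ASSISTANT:"

def alpaca_prompt(question: str) -> str:
    # The other dataset: the standard Alpaca preamble plus
    # "### Instruction:" / "### Response:" headers.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{question}\n\n### Response:\n"
    )

# Example: build an inference prompt and let the model complete it.
print(vicuna_prompt("What is the capital of France?"))
```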

So what format should I actually use?

EDIT: fixed my mistake about the Hugging Face Space

Open Access AI Collective org

My goal is to give the model enough examples that it can handle any prompt format and infer the meaning.

Have you figured it out?
