Poor quality

#1
by kimihailv - opened

Hi. I tried your model with a simple prompt:

<human>: Who are you?\n<bot>:

The model returned:

<s> <human>: Who are you?\n<bot>: Not Found.<|endoftext|>

Is it expected behaviour?

llmware org

Thanks for your feedback. Yes, this is the expected behavior. This model was not intended for open-context chatbot interactions; it is designed for closed-context RAG applications in which both a question and a context passage are included in the prompt.

In your example, since there is no context passage in the prompt from which to answer the question "Who are you?", the model responds as it was trained to, with "Not Found."

If you try another 'hello world' variant of your prompt, such as "<human>: My name is William.\nWhat is my name?\n<bot>:", then in most cases the model should respond with "William." Please feel free to look at this test dataset (/datasets/llmware/rag_instruct_benchmark_tester), which gives a good view of the sample questions and context passages on which the model was trained.
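
For reference, here is a minimal sketch of that closed-context prompt pattern using the standard Hugging Face transformers API. The model id below is a placeholder, not the actual checkpoint from this thread, and the decoding settings are illustrative only:

```python
# Minimal sketch of a closed-context RAG prompt, assuming the standard
# transformers causal-LM API. "llmware/your-rag-model" is a placeholder
# model id, not the checkpoint discussed here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llmware/your-rag-model"  # placeholder, replace with the real model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Closed-context format: the context passage and the question are both
# placed in the <human> turn, and the model completes after "<bot>:".
context_passage = "My name is William. I work as an engineer in Boston."
question = "What is my name?"
prompt = f"<human>: {context_passage}\n{question}\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
answer = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)  # expected answer: "William"

# The benchmark dataset mentioned above can be browsed the same way:
from datasets import load_dataset
benchmark = load_dataset("llmware/rag_instruct_benchmark_tester")
print(benchmark)
```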

Hope that answers your question. Please let me know if you have any other questions or feedback.

Oh, I got it. Thank you

kimihailv changed discussion status to closed
