Use model for RAG application

#6 opened by Lue-C

Hi there,

I want to use the model for a RAG application. It works very well at picking up the relevant information and fusing it into an answer, but there is one essential problem I encounter:
When I ask a question that is not covered by the given information, the model makes up an answer anyway, even though I asked it not to. I have tried several prompts, such as:

prompt_template = """
    <|im_start|>system
    Beantworte die gestellte Frage. Benutze dabei ausschließlich folgende Informationen und nicht dein internes Wissen. Nutze lediglich die Informationen, die zur Beantwortung der Frage notwendig sind, und gib die Metadaten dieser Informationen an.
    Wenn die Informationen nicht ausreichen, um die Frage zu beantworten, dann sage, dass die Informationen nicht ausreichen.
    {context}
    <|im_end|>
    <|im_start|>user
    {question}<|im_end|>
    <|im_start|>assistant
    """

(In English, the system prompt says: "Answer the question posed. Use exclusively the following information and not your internal knowledge. Use only the information necessary to answer the question and state the metadata of that information. If the information is not sufficient to answer the question, say that the information is not sufficient.")

So, my question is the following: how can I get the model to refuse to answer when the question is not covered by the given information?
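
For completeness, this is roughly how I fill the template and generate (simplified; the model name and the two input variables are placeholders, not my actual setup):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "my-org/german-mistral-chatml"  # placeholder, not the real model name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

retrieved_context = "..."  # placeholder: passages from the vector store, with metadata
user_question = "..."      # placeholder: the user's question

# Fill the template from above with the retrieved context and the question
prompt = prompt_template.format(context=retrieved_context, question=user_question)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Greedy decoding (do_sample=False) keeps the output deterministic, which makes
# it easier to compare how different prompt wordings affect the refusal behaviour
output = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))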

Hi,
I am also in the process of developing a RAG application for German data during my internship semester.
Can I ask which framework, tokenizer, and embedding model you use?
So far I have experimented with custom models in Ollama and looked a bit into LangChain and Haystack; a sketch of my current experiment is below.
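
My Ollama experiments look roughly like this (everything here is a placeholder sketch, not a finished pipeline):

import ollama  # pip install ollama; assumes a local Ollama server is running

retrieved_context = "..."  # placeholder: passages from whatever retriever I end up using
question = "..."           # placeholder: the user's question

response = ollama.chat(
    model="mistral",  # placeholder; in practice a custom model built from a Modelfile
    messages=[
        {
            "role": "system",
            "content": "Beantworte die Frage ausschließlich anhand dieser Informationen:\n"
            + retrieved_context,
        },
        {"role": "user", "content": question},
    ],
)
print(response["message"]["content"])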
