
Langchain QA hallucinations

#18
by fathyshalab - opened

Is there any way to limit how often the model hallucinates answers outside a given context?

Currently it tries to answer from outside the context when the information isn't there, rather than saying it can't answer. The prompt I am using is the same as quick_pipeline.py on your Space. I'm trying to integrate it into LangChain's Retrieval QA over docs; what is the best way to do that? I tried reducing the top_k etc., but haven't gotten better results.

I would suggest trying your own prompt first, without LangChain. That is, just load the model with transformers and generate from there, customizing the prompt as needed.

In this case you will need to pass the documents in as the context yourself, but that lets you play with the prompt, because LangChain's default prompt doesn't work well for open-source models. Something like the sketch below:
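A minimal sketch, assuming the mosaicml/mpt-7b-instruct checkpoint (swap in whichever model you are actually using) and a context-grounding template of my own invention; adjust model name, dtype, and device to your setup:

```python
import torch
import transformers

model_name = "mosaicml/mpt-7b-instruct"  # assumption: replace with your checkpoint
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # MPT ships custom modeling code
)

# Prompt that instructs the model to stay inside the supplied context.
PROMPT_TEMPLATE = """Answer the question using only the context below.
If the answer is not contained in the context, say "I don't know."

Context:
{context}

Question: {question}
Answer:"""

context = "..."   # the retrieved document text, passed in by you
question = "..."  # the user question

inputs = tokenizer(
    PROMPT_TEMPLATE.format(context=context, question=question),
    return_tensors="pt",
)
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=False,  # greedy decoding tends to reduce off-context rambling
        pad_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```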
Then, once you find a better prompt, one that makes the model answer from the context only and say "I don't know" when the answer isn't in the context, use that custom prompt in LangChain, as sketched after this paragraph.
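For instance, assuming you already have a `retriever` (your vector store) and an `llm` wrapper (e.g. langchain.llms.HuggingFacePipeline around the model above), and using the RetrievalQA API from current LangChain releases:

```python
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

template = """Answer the question using only the context below.
If the answer is not contained in the context, say "I don't know."

Context:
{context}

Question: {question}
Answer:"""

qa_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=template,
)

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,                                  # assumed: your LangChain LLM wrapper
    chain_type="stuff",                       # stuffs retrieved docs into {context}
    retriever=retriever,                      # assumed: your existing retriever
    chain_type_kwargs={"prompt": qa_prompt},  # overrides LangChain's default QA prompt
)

print(qa_chain.run("your question here"))
```

The "stuff" chain type concatenates the retrieved documents into the `{context}` slot, so the same template carries over unchanged from the standalone test.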

Let me know if that helps.

Hi, I want to cite the dataset "fathyshalab/clinic-travel", but you do not have citation information.

Thanks, tried it out. Still getting some random answers, but it's already slightly better.

fathyshalab changed discussion status to closed
