Question about Input Data Processing for Chatbot
I'd like to understand whether the questions or commands I type into the chatbot are presented to the AI exactly as entered, or whether any preprocessing, modification, or prompt engineering is applied before the AI receives and processes them.
Understanding this is important for users like me to gauge how direct and authentic the interaction with the AI really is.
Thanks
Hi, if you want to check what prompt building HuggingChat uses, feel free to refer to the codebase on GitHub, which has everything you need. Specifically, the prod config is here.
There's also a download button next to user messages in chats that lets you download the full raw prompt being used.
Hopefully that answers your question! Let me know otherwise.
Thanks a lot for your answer! I was just wondering whether deploying models like Llama 2 directly would work as well as they do in HuggingChat.
The download button showed me exactly how prompts are passed. It's been super useful. Keep up the great work!
Happy to hear it helped!