How to try Falcon in HuggingChat?

by promptgai

Steps to try Falcon using HuggingChat

If you change the model_id to tiiuae/falcon-40b-instruct and load it in text-generation-inference, you can run it behind HuggingChat locally.
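For anyone following along, here is a minimal sketch of that launch step, assuming the official text-generation-inference Docker image. The port, volume path, and shard count are illustrative and depend on your hardware (falcon-40b-instruct needs several large GPUs), and --trust-remote-code was needed for Falcon's custom modelling code at the time:

 # serve falcon-40b-instruct over the local TGI API (sketch; adjust for your setup)
 docker run --gpus all --shm-size 1g -p 8080:80 \
   -v $PWD/tgi-data:/data \
   ghcr.io/huggingface/text-generation-inference:latest \
   --model-id tiiuae/falcon-40b-instruct \
   --num-shard 4 \
   --trust-remote-code

Once the server is up, point HuggingChat (chat-ui) at http://127.0.0.1:8080 through the MODELS entry in .env.local; a fuller config sketch appears further down the thread.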

So I successfully managed to run falcon-40b-instruct in text-generation-inference and connect it to HuggingChat. However, I am not sure what the following tokens should be (unlike OpenAssistant, Falcon does not define them):

 "userMessageToken": "<|prompter|>",
 "assistantMessageToken": "<|assistant|>",
 "messageEndToken": "<|endoftext|>",
Technology Innovation Institute org

We don't have any such tokens set in this instruct version, so you could set the user token to "User:" and the assistant token to "Assistant:" (assuming HuggingChat accepts plain-text strings as tokens).
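Putting that suggestion together, a chat-ui MODELS entry in .env.local might look like the sketch below. The field names are the ones discussed above, while the endpoint URL, preprompt wording, and generation parameters are illustrative assumptions, not values confirmed in this thread:

 MODELS=`[
   {
     "name": "tiiuae/falcon-40b-instruct",
     "endpoints": [{ "url": "http://127.0.0.1:8080" }],
     "userMessageToken": "User: ",
     "assistantMessageToken": "Assistant: ",
     "messageEndToken": "\n",
     "preprompt": "The following is a conversation between a curious user and a helpful assistant.\n",
     "parameters": {
       "temperature": 0.7,
       "max_new_tokens": 512,
       "truncate": 1000
     }
   }
 ]`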

Hello, I have falcon-40b-instruct running with text-generation-inference and HuggingChat on a local machine; has anyone else managed to do so? I am not sure about the three tokens listed above or the preprompt. Currently the model seems to continue the conversation from the perspective of the user after answering the initial prompt. Any ideas?
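One fix that often stops a model from speaking as the user, sketched here: add the user token as a stop sequence in the model's "parameters" block, so generation halts before the model starts writing the next "User:" turn. TGI supports stop sequences and chat-ui passes "parameters" through to it; the exact values below are assumptions:

 "parameters": {
   "temperature": 0.7,
   "max_new_tokens": 512,
   "stop": ["User:", "<|endoftext|>"]
 }

Depending on the chat-ui version, the matched stop string may still need to be trimmed from the displayed reply.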

I am having the same issue with Falcon models in general. The special tokens, and how and where to use them in the various prompts, are mostly hit-and-miss for me, and I cannot get this to work nicely with external input/context.
