Getting gibberish output with Falcon-40b instruct

#83
by harsh244 - opened

Hi,

I am using a Falcon-40b-instruct model deployed on RunPod on an NVIDIA A100 instance with 80 GB VRAM and 125 GB RAM. The model fails to answer simple questions like the following:

What is 2+2-3+5 equal to ?

The following is the output from the model:

The expression would be evaluated as follows:
(1) First, we add the first two numbers on either side of equals sign i.e., 4. Then subtracting it from both sides gives us an equation with one unknown variable x which can take any value between negative infinity and positive Infinity depending upon how you solve for X in this case since there are no restrictions given by question or answer choices.
Can someone explain why solving equations involving variables requires different strategies than simply adding up a series?


I am accessing the model using LangChain's HuggingFaceTextGenInference API:

llm = HuggingFaceTextGenInference(inference_server_url=url, repetition_penalty=2, streaming=True, temperature=0.01)

Can anybody explain where I am going wrong? Is there anything wrong with the deployment or the hyperparameters?
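One likely suspect is repetition_penalty=2, which is far outside the usual range (roughly 1.0-1.2; 1.0 is a no-op). The standard implementation, following the CTRL paper, divides the positive logits of every already-generated token by the penalty, so a value of 2 aggressively pushes the model away from tokens it has already used, even when repeating one is correct (e.g. digits in arithmetic). Combined with temperature=0.01 (near-greedy decoding), the runner-up token wins instead. A minimal pure-Python sketch of the effect, on a hypothetical 4-token vocabulary:

```python
import math

def apply_repetition_penalty(logits, seen_ids, penalty):
    """HF-style repetition penalty (CTRL paper): divide positive logits of
    already-generated tokens by the penalty, multiply negative ones."""
    out = list(logits)
    for i in seen_ids:
        out[i] = out[i] / penalty if out[i] > 0 else out[i] * penalty
    return out

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary of 4 tokens; token 0 is the "correct" next token but has
# already appeared earlier in the output.
logits = [5.0, 2.0, 1.5, 1.0]
seen = {0}

for penalty in (1.0, 1.2, 2.0):
    probs = softmax(apply_repetition_penalty(logits, seen, penalty))
    print(f"penalty={penalty}: P(correct token) = {probs[0]:.2f}")
```

In this toy example the correct token's probability falls from about 0.91 at penalty 1.0 to about 0.46 at penalty 2.0, enough for near-greedy decoding to pick a different token. Dropping repetition_penalty to something like 1.03-1.1 (or omitting it) would be the first thing to try.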

Hi,
I have been trying to use Falcon-40B-instruct with Arena, and the model seems to break in multi-turn conversation: it keeps generating random conversations by itself. On the other hand, the following deployment by Hugging Face works just fine: https://huggingface.co/spaces/HuggingFaceH4/falcon-chat. Is there a specific template that was used for instruction-tuning the model?

I am also interested in this. I would like some help with conforming to the template used during instruction tuning.
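TII has not published the exact fine-tuning template, but the Falcon instruct models are commonly prompted with a plain "User:"/"Assistant:" dialogue format, with a stop sequence on the next "User:" turn so the model doesn't invent the other side of the conversation. A sketch under that assumption (the function name and history format are mine, not from the model card):

```python
def build_falcon_prompt(history, user_msg):
    """Build a multi-turn prompt in the User:/Assistant: format that Falcon
    instruct models are commonly prompted with. The exact fine-tuning
    template is undocumented, so this is an assumption to experiment with.

    history: list of (user_text, assistant_text) pairs from earlier turns.
    """
    lines = []
    for user_text, assistant_text in history:
        lines.append(f"User: {user_text}")
        lines.append(f"Assistant: {assistant_text}")
    lines.append(f"User: {user_msg}")
    lines.append("Assistant:")  # model completes from here
    return "\n".join(lines)

history = [("What is 2+2-3+5 equal to?", "2+2-3+5 equals 6.")]
print(build_falcon_prompt(history, "And if we add 10 to that?"))
```

When calling through HuggingFaceTextGenInference, passing something like stop_sequences=["\nUser:"] should halt generation at the turn boundary instead of letting the model continue the dialogue on its own, which matches the runaway-conversation symptom described above.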
