When there is a more flexible prompt, the model tends not to output

#5 · opened by syGOAT

The model seems to be particularly sensitive to prompts. When I use "Translate this from English to Chinese:\nEnglish: " + user_prompt + "\nChinese:" or "Translate this from Chinese to English:\nChinese: " + user_prompt + "\nEnglish:", it can output the right answer.

from openai import OpenAI

# Point the OpenAI client at a local OpenAI-compatible server
# (e.g. vLLM); no real API key is required.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8080/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

user_prompt = '''
When there is a more flexible prompt, the model tends not to output.
'''

# Build the fixed translation prompt; strip() drops the newlines the
# triple-quoted string adds around the sentence, so the prompt matches
# the intended single-line format.
prompt = "Translate this from English to Chinese:\nEnglish: " + user_prompt.strip() + "\nChinese:"

response = client.completions.create(
    model='ALMA',
    prompt=prompt,
    temperature=0.7,
    top_p=0.95,
    stream=True,
    max_tokens=2048,
)

# Print the streamed completion chunk by chunk.
for chunk in response:
    print(chunk.choices[0].text, end='', flush=True)
print()

But if I make the prompt more flexible, like "Translate this from English to Chinese or Chinese to English:\nOriginal text: " + user_prompt + "\nResult:", it outputs nothing.
Is this a limitation? Can the translation task be completed with a more flexible prompt?
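
For concreteness, here is a minimal sketch of the failing variant, reusing the client and user_prompt from the script above; only the prompt string changes:

# Same setup as before; only the (more flexible) prompt differs.
prompt = ("Translate this from English to Chinese or Chinese to English:"
          "\nOriginal text: " + user_prompt.strip() + "\nResult:")

response = client.completions.create(
    model='ALMA',
    prompt=prompt,
    temperature=0.7,
    top_p=0.95,
    stream=True,
    max_tokens=2048,
)

# With this prompt, the stream yields no text (the reported behavior).
for chunk in response:
    print(chunk.choices[0].text, end='', flush=True)
print()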

Thanks for the interest in our work! Yes, the model is expected to be sensitive to the given translation prompt, because the prompt used to train the ALMA models is:

Translate this from <source language> to <target language>:
<source language>: <source sentence>
<target language>:

For optimal results during inference, users should use the fixed prompt exactly as provided above.
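
As an illustration, a small helper can fill in that fixed template; the function name build_alma_prompt is just for this sketch and is not part of the ALMA code:

# Hypothetical helper (not from the ALMA repo): fills in the fixed
# training template shown above with full language names.
def build_alma_prompt(source_lang: str, target_lang: str, text: str) -> str:
    return (
        f"Translate this from {source_lang} to {target_lang}:\n"
        f"{source_lang}: {text}\n"
        f"{target_lang}:"
    )

# Example: reproduces the English-to-Chinese prompt from the first script.
prompt = build_alma_prompt("English", "Chinese", user_prompt.strip())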
