It seems that this model sometimes ignores user instructions

#12
by jlzhou - opened

I accidentally found that this model ignores at least part of my system message. To validate this, I served the model with TGI and put together the following demo using langchain and langchain-openai.

setup:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    openai_api_base=openai_api_base,  # local TGI endpoint
    openai_api_key="nothing",         # dummy value; the local TGI server does not validate it
    model="someone",                  # arbitrary; TGI serves a single model regardless of this name
    max_tokens=512,
    temperature=0.9,
    model_kwargs={
        "top_p": 0.3,
    },
)

a simple system message:

messages = [
    ("system", "You are Rei, an AI assistant developed by FooBar."),
    ("user", "tell me about yourself")
]
print(llm.invoke(messages).content)

and the response from the model:

I'm sorry for the confusion, but as an AI developed by Deepseek, I don't have personal experiences or emotions, and I don't have a personal identity. I'm designed to assist with computer science-related inquiries. If you have any questions related to programming, algorithms, data structures, or similar topics, feel free to ask!

After removing the default system instruction from the chat_template, it now works. So I suspect there's something wrong with the chat_template that causes the default system instruction to always be set, even when a system message is supplied.
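To check this without going through TGI, the prompt can be rendered directly with transformers. A minimal sketch; the model id below is a placeholder for this repo, not its real name:

from transformers import AutoTokenizer

# placeholder id -- substitute the actual repo this discussion belongs to
tokenizer = AutoTokenizer.from_pretrained("org/model-name")

messages = [
    {"role": "system", "content": "You are Rei, an AI assistant developed by FooBar."},
    {"role": "user", "content": "tell me about yourself"},
]

# render the prompt string exactly as the server would before generation
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)

With the stock template, the Deepseek default instruction shows up in the rendered prompt ahead of my own system message, even though one was supplied.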

This is the original template:

{%- set found_item = false -%}
{%- for message in messages -%}
    {%- if message['role'] == 'system' -%}
        {%- set found_item = true -%}
    {%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{'You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.\\n'}}
{%- endif %}
{%- for message in messages %}
    {%- if message['role'] == 'system' %}
{{ message['content'] }}
    {%- else %}
        {%- if message['role'] == 'user' %}
{{'### Instruction:\\n' + message['content'] + '\\n'}}
        {%- else %}
{{'### Response:\\n' + message['content'] + '\\n<|EOT|>\\n'}}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{{'### Response:\\n'}}

And this is the one I'm using:

{%- for message in messages %}
    {%- if message['role'] == 'system' %}
{{ message['content'] }}
    {%- else %}
        {%- if message['role'] == 'user' %}
{{'### Instruction:\\n' + message['content'] + '\\n'}}
        {%- else %}
{{'### Response:\\n' + message['content'] + '\\n<|EOT|>\\n'}}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{{'### Response:\\n'}}
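For anyone who wants to do the same: chat_template is just a string field in tokenizer_config.json, so it can be patched in a local copy of the model before serving. A rough sketch, with placeholder paths:

from transformers import AutoTokenizer

# path to a local copy of the model (placeholder)
tokenizer = AutoTokenizer.from_pretrained("./local-model-copy")

# swap in the stripped-down template above, kept here in a plain text file
with open("chat_template.jinja") as f:
    tokenizer.chat_template = f.read()

# rewrites tokenizer_config.json next to the weights, so TGI picks up
# the new template on the next start
tokenizer.save_pretrained("./local-model-copy")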
Multimodal Art Projection org
edited Mar 28

Due to our focus on domain-specific enhancements, particularly in coding and programming, it's important to note that after applying SFT, smaller models in particular lose a significant amount of their general-domain capability.

I think it's not the model itself; I think there's some problem in the chat_template, but I'm not familiar with Jinja so I'm not 100% sure.

There's a section in the chat_template that attempts to inject a default system message if the user does not specify one.

However, if I remove this section, the LLM starts to follow my instructions and admits it is 'Rei'.
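If the intent is to keep the fallback prompt while still honoring a user-supplied system message, I believe the usual Jinja workaround is a namespace object: a plain {% set %} inside a {% for %} loop only creates a loop-local variable, which would explain why found_item never becomes true and the default instruction is always emitted. A sketch of the fixed detection block (untested; the message-formatting loop after it stays unchanged):

{%- set ns = namespace(found=false) -%}
{%- for message in messages -%}
    {%- if message['role'] == 'system' -%}
        {%- set ns.found = true -%}
    {%- endif -%}
{%- endfor -%}
{%- if not ns.found -%}
{{'You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.\\n'}}
{%- endif %}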
