tokenizer.chat_template

#2 opened by leonardlin

If anyone wants to use HF's new chat templates, here's a template whose output exactly matches the prompt format in the docs:

tokenizer.chat_template = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{% if message['role'] == 'system' %}{{ message['content'] + '\n\n' }}{% elif message['role'] == 'user' %}{{'### 指瀺:\n' + message['content'] + '\n\n'}}{% elif message['role'] == 'assistant' %}{{'### 応答:\n' + message['content'] + '\n\n'}}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '### 応答:' }}{% endif %}"
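For example (a minimal sketch; the repo id and CHAT_TEMPLATE variable are placeholders, not anything official), you can attach the template to a loaded tokenizer and save it so it ends up in tokenizer_config.json:

from transformers import AutoTokenizer

# Placeholder model id; substitute the checkpoint you're actually using.
tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")

# CHAT_TEMPLATE holds the Jinja string shown above.
tokenizer.chat_template = CHAT_TEMPLATE

# Persist locally (or push_to_hub) so apply_chat_template() picks it up by default.
tokenizer.save_pretrained("./my-model-with-template")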

The roles are system, user, and assistant; you can start the conversation with the model's suggested system prompt:

PROMPT = "仄䞋に、あるタă‚čクをèȘŹæ˜Žă™ă‚‹æŒ‡ç€șăŒă‚ă‚ŠăŸă™ă€‚ăƒȘクスă‚čăƒˆă‚’é©ćˆ‡ă«ćźŒäș†ă™ă‚‹ăŸă‚ăźć›žç­”ă‚’èš˜èż°ă—ăŠăă ă•ă„ă€‚"
chat = []
chat.append({"role": "system", "content": PROMPT})

For those looking for MT-Bench formatting, I also made a version that's close (I'm not sure whether FastChat's ADD_COLON_SINGLE separator style adds the appropriate \n or not): https://github.com/AUGMXNT/shisa/wiki/Evals-:-JA-MT%E2%80%90Bench#swallow
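For comparison, here's a minimal sketch of how I recall FastChat's ADD_COLON_SINGLE style joining a conversation (written from memory rather than imported from FastChat, so treat the exact behavior as an assumption; it reuses PROMPT from above). Note that it emits 'role: message' with a space after the colon rather than a newline, which is exactly the potential mismatch with the '### 応答:\n' layout in the template:

def add_colon_single_prompt(system, messages, sep="\n\n"):
    # Mimics ADD_COLON_SINGLE as I remember it: system + sep, then
    # 'role: message' + sep per turn; a turn with no message yet ends
    # with just 'role:' so the model continues from there.
    ret = system + sep
    for role, message in messages:
        if message:
            ret += role + ": " + message + sep
        else:
            ret += role + ":"
    return ret

print(add_colon_single_prompt(
    PROMPT,
    [("### 指瀺", "こんにちは"), ("### 応答", None)],
))

If the only difference against tokenizer.apply_chat_template's output is space-vs-newline after the role header, that's the \n question above.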
