How to set up parameters?

#1
by ChrisTorng

I've tried the Taiwan LLM ChatUI, and its results are good.

I am running audreyt/Taiwan-LLM-7B-v2.0.1-chat-GGUF (Q2) locally in Text generation web UI on an RTX 3070 (8 GB), but I can't get good responses. Sometimes it takes a very long time (several minutes) to generate the first token, though it's fast afterwards; sometimes it generates an empty response.

I'm new to LLMs. I don't think the problem is mainly the Q2 quantization; it's more likely my basic parameters. I need some help finding the proper settings.
Are the following correct?

Model:
Model loader: ctransformers
n-gpu-layers: 128 (max)
n_ctx: 4096 (default)
threads: 4 (physical CPU cores)
n_batch: 512 (default)
model_type: llama
no-mmap/mlock: checked
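
For reference, here is how I understand these settings map onto a direct ctransformers call in Python (a sketch only; the model_file name is my guess for the Q2 file, so adjust it to the actual file in the repo):

from ctransformers import AutoModelForCausalLM

# Load the GGUF model with (what I believe are) the equivalent settings.
llm = AutoModelForCausalLM.from_pretrained(
    "audreyt/Taiwan-LLM-7B-v2.0.1-chat-GGUF",
    model_file="Taiwan-LLM-7B-v2.0.1-chat-Q2_K.gguf",  # assumed Q2 file name
    model_type="llama",
    gpu_layers=128,       # n-gpu-layers
    context_length=4096,  # n_ctx
    threads=4,            # physical CPU cores
    batch_size=512,       # n_batch
    mmap=False,           # no-mmap checked
    mlock=True,           # mlock checked
)

# Quick test with the Vicuna-style prompt format ("你好" means "Hello").
print(llm("USER: 你好 ASSISTANT:", max_new_tokens=64))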

Instruction template: Vicuna-v1.1
User string: USER: (default)
Bot string: ASSISTANT: (default)
Context: (my objective in Traditional Chinese)
Turn template: <|user|> <|user-message|>\n<|bot|> <|bot-message|>\n (default; see the expansion sketch after this list)
Command for chat-instruct mode: (default)
Continue the chat dialogue below. Write a single reply for the character "<|character|>".

<|prompt|>

Chat:
Mode: instruct (not chat/chat-instruct)

Everything else is kept at the defaults.
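
To check my understanding of the turn template above, here is a toy sketch of how I believe it expands (the render_turn helper is mine for illustration, not actual web UI code):

# Toy expansion of the Vicuna-v1.1 turn template.
turn_template = "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n"

def render_turn(user_message, bot_message=""):
    # "USER:" and "ASSISTANT:" are the User/Bot strings from the settings above.
    return (turn_template
            .replace("<|user|>", "USER:")
            .replace("<|bot|>", "ASSISTANT:")
            .replace("<|user-message|>", user_message)
            .replace("<|bot-message|>", bot_message))

print(render_turn("Hello, how are you?"))
# Expected: "USER: Hello, how are you?" then "ASSISTANT: " on the next line.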

Thanks for your kind help.

The default prompt template for Taiwan LLM v2 is

你是人工智慧助理,以下是用戶和人工智能助理之間的對話。你要對用戶的問題提供有用、安全、詳細和禮貌的回答。USER: {user} ASSISTANT:

(In English: "You are an AI assistant. Below is a conversation between a user and an AI assistant. You should provide helpful, safe, detailed, and polite answers to the user's questions.")

It's still in the Vicuna format, but with my own system instruction. The v2 models are trained to follow different system instructions, so you can adjust it to your use case or even skip it entirely.

You can build the prompt programmatically with the chat template that ships with the model, for example:

from transformers import AutoTokenizer

# An example conversation; the system message means "Speak Chinese."
chat = [
    {"role": "system", "content": "你講中文"},
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]

tokenizer = AutoTokenizer.from_pretrained("yentinglin/Taiwan-LLM-7B-v2.0.1-chat")

# Render the conversation as one prompt string.
prompt_for_completed_conversation = tokenizer.apply_chat_template(chat, tokenize=False)
# Same, but with the assistant header appended so the model continues from it.
prompt_for_generation = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
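
Printing both strings shows exactly what the model will see; if the template matches the default above, prompt_for_generation should end with "ASSISTANT:" so the model knows to continue from there:

# Quick sanity check on the rendered prompts.
print(prompt_for_completed_conversation)
print(prompt_for_generation)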

Isn't an "end of sentence" (EOS) token needed?
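
One way I tried to check (a sketch reusing the tokenizer from the snippet above):

# Print the EOS token and test whether the completed conversation ends with it.
print(tokenizer.eos_token)
print(prompt_for_completed_conversation.rstrip().endswith(tokenizer.eos_token))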
