LLaMA Translator
Template:
# one of 'src_lang' and 'tgt_lang' should be "한국어"
src_lang = "English"  # English, 한국어, 日本語, 中文
tgt_lang = "한국어"  # English, 한국어, 日本語, 中文
text = "New era, same empire. T1 is your 2024 Worlds champion!"

# task part
task_xml_dict = {
    'head': "<task>",
    'body': f"Translate the source sentence from {src_lang} to {tgt_lang}.\nBe sure to reflect the guidelines below when translating.",
    'tail': "</task>"
}
task = f"{task_xml_dict['head']}\n{task_xml_dict['body']}\n{task_xml_dict['tail']}"

# instruction part
instruction_xml_dict = {
    'head': "<instruction>",
    'body': ["Translate without any condition."],
    'tail': "</instruction>"
}
instruction_xml_body = '\n'.join([f'- {body}' for body in instruction_xml_dict['body']])
instruction = f"{instruction_xml_dict['head']}\n{instruction_xml_body}\n{instruction_xml_dict['tail']}"

# translation part
src_xml_dict = {
    'head': f"<source><{src_lang}>",
    'body': text.strip(),
    'tail': f"</{src_lang}></source>"
}
tgt_xml_dict = {
    'head': f"<target><{tgt_lang}>",
}
src = f"{src_xml_dict['head']}\n{src_xml_dict['body']}\n{src_xml_dict['tail']}"
tgt = f"{tgt_xml_dict['head']}\n"
translation_xml_dict = {
    'head': "<translation>",
    'body': f"{src}\n{tgt}",
}
translation = f"{translation_xml_dict['head']}\n{translation_xml_dict['body']}"

# final prompt
prompt = f"{task}\n\n{instruction}\n\n{translation}"
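If the prompt is built repeatedly, the same string can be produced with a small helper. The sketch below is not part of the original template; the name build_prompt and the guidelines parameter are illustrative, but the string it returns is identical to prompt above (see Example Input below).

def build_prompt(src_lang, tgt_lang, text, guidelines=("Translate without any condition.",)):
    # hypothetical convenience wrapper around the template above
    task = (
        "<task>\n"
        f"Translate the source sentence from {src_lang} to {tgt_lang}.\n"
        "Be sure to reflect the guidelines below when translating.\n"
        "</task>"
    )
    instruction = "<instruction>\n" + "\n".join(f"- {g}" for g in guidelines) + "\n</instruction>"
    translation = (
        "<translation>\n"
        f"<source><{src_lang}>\n{text.strip()}\n</{src_lang}></source>\n"
        f"<target><{tgt_lang}>\n"
    )
    return f"{task}\n\n{instruction}\n\n{translation}"

prompt = build_prompt("English", "한국어", text)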
Example Input:
<task>
Translate the source sentence from English to 한국어.
Be sure to reflect the guidelines below when translating.
</task>
<instruction>
- Translate without any condition.
</instruction>
<translation>
<source><English>
New era, same empire. T1 is your 2024 Worlds champion!
</English></source>
<target><한국어>
Expected Output:
새로운 시대, 여전한 왕조. 티원이 2024 월즈의 챔피언입니다!
</한국어></target>
</translation>
Model will generate the XML end tags.
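Because the closing tags come out together with the translation, a small post-processing step can keep only the translated text. The helper below is a sketch (the name strip_xml_tail is not from the original card); it simply cuts the output at the closing target tag.

def strip_xml_tail(generated, tgt_lang="한국어"):
    # keep only the text before the closing target tag the model appends
    tail = f"</{tgt_lang}></target>"
    return generated.split(tail)[0].strip()

raw = "새로운 시대, 여전한 왕조. 티원이 2024 월즈의 챔피언입니다!\n</한국어></target>\n</translation>"
print(strip_xml_tail(raw))  # 새로운 시대, 여전한 왕조. 티원이 2024 월즈의 챔피언입니다!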
# MODEL
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

model_name = 'beomi/Llama-3-Open-Ko-8B'
adapter_name = 'traintogpb/llama-3-mmt-xml-it-sft-adapter'

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    max_length=4096,
    attn_implementation='flash_attention_2',  # requires a CUDA GPU with flash-attn installed
    torch_dtype=torch.bfloat16,
    device_map='auto',
)
model = PeftModel.from_pretrained(
    model,
    adapter_name,
    torch_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(adapter_name)
tokenizer.pad_token_id = 128002  # eos_token_id and pad_token_id should be different

text = "New era, same empire. T1 is your 2024 Worlds champion!"
input_prompt = prompt  # the prompt built with the template above ("<task> ... <target><한국어>")

inputs = tokenizer(input_prompt, max_length=2000, truncation=True, return_tensors='pt')
# drop a trailing EOS token if the tokenizer appended one
if inputs['input_ids'][0][-1] == tokenizer.eos_token_id:
    inputs['input_ids'] = inputs['input_ids'][0][:-1].unsqueeze(dim=0)
    inputs['attention_mask'] = inputs['attention_mask'][0][:-1].unsqueeze(dim=0)
inputs = inputs.to(model.device)

outputs = model.generate(**inputs, max_length=2000, eos_token_id=tokenizer.eos_token_id)

input_len = len(inputs['input_ids'].squeeze())
translation = tokenizer.decode(outputs[0][input_len:], skip_special_tokens=True)
print(translation)
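For repeated calls, the tokenization, generation, and tag-stripping steps can be folded into one function. This is a sketch, not part of the original card; it assumes model and tokenizer are loaded as shown above and that the prompt is built with the template from the first section.

def translate(prompt, max_new_tokens=512):
    # hypothetical end-to-end helper: tokenize, generate, then strip the XML tail
    inputs = tokenizer(prompt, max_length=2000, truncation=True, return_tensors='pt')
    if inputs['input_ids'][0][-1] == tokenizer.eos_token_id:  # drop a trailing EOS, as above
        inputs['input_ids'] = inputs['input_ids'][:, :-1]
        inputs['attention_mask'] = inputs['attention_mask'][:, :-1]
    inputs = inputs.to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, eos_token_id=tokenizer.eos_token_id)
    generated = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
    return generated.split("</")[0].strip()  # cut at the first closing tag

print(translate(prompt))  # 새로운 시대, 여전한 왕조. 티원이 2024 월즈의 챔피언입니다!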
Base model: beomi/Llama-3-Open-Ko-8B