
Cantonese Llama 2 7b v1

Model Introduction

This model is a fine-tune of cantonese-llama-2-7b, which is itself a further-pretrained model based on Meta's Llama 2. The fine-tuning data consists of OpenAssistant/oasst1 (with all Simplified Chinese removed), indiejoseph/ted-transcriptions-cantonese, indiejoseph/wikipedia-zh-yue-qa, indiejoseph/wikipedia-zh-yue-summaries, and indiejoseph/ted-translation-zhhk-zhcn. This fine-tuned model is intended to evaluate the impact of Simplified Chinese in the Llama 2 pretraining data.
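The exact Simplified Chinese filter is not published with this card. The sketch below shows one plausible way the oasst1 subset could be reproduced, assuming the dataset's lang field tags Simplified Chinese rows as "zh" (an assumption about the filtering, not a documented detail of this model's pipeline):

from datasets import load_dataset

# hypothetical reproduction of the oasst1 subset used for fine-tuning:
# drop every row whose language tag is Simplified Chinese ("zh")
oasst1 = load_dataset("OpenAssistant/oasst1", split="train")
oasst1_no_simplified = oasst1.filter(lambda row: row["lang"] != "zh")
print(len(oasst1), len(oasst1_no_simplified))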

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# load the fine-tuned model and tokenizer; device_map="auto" places the weights on available GPUs
model = AutoModelForCausalLM.from_pretrained("indiejoseph/cantonese-llama-2-7b-oasst-v1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("indiejoseph/cantonese-llama-2-7b-oasst-v1")

template = """A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's questions.

Human: {}

Assistant: 
"""

# Llama 2's tokenizer ships without a pad token; set one and pad on the left,
# which is what decoder-only models need for batched generation
tokenizer.pad_token = "[PAD]"
tokenizer.padding_side = "left"

def inference(input_texts):
    # batch-encode the prompts and move them to the same device as the model
    # (hard-coding 'cuda' breaks on CPU-only machines)
    inputs = tokenizer(
        [template.format(text) for text in input_texts],
        return_tensors="pt",
        padding=True,
        truncation=True,
        max_length=512,
    ).to(model.device)

    generate_ids = model.generate(**inputs, max_new_tokens=512)
    outputs = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
    # keep only the assistant's reply, dropping the echoed prompt
    outputs = [out.split("Assistant:")[1].strip() for out in outputs]

    return outputs


print(inference("香港現任特首係邊個?"))
# Output: 香港現任特首係李家超。

print(inference("2019年香港發生咗咩事?"))
# Output: 2019年香港發生咗反修例運動。
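inference() above decodes greedily. For more varied replies, the standard transformers sampling arguments can be swapped in for the generate call inside inference(); the parameter values below are illustrative, not tuned for this model:

# drop-in replacement for the generate call inside inference();
# sampling parameters here are illustrative, not tuned for this model
generate_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)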