

Model Card for Taigi-Llama-2-Translator-7B

The Taigi-Llama-2-Translator series is built on top of the Taigi-Llama-2 series of models. We fine-tuned them on 263k parallel examples to create a translation model for Taiwanese Hokkien and related languages.

For more details, please refer to our GitHub repository and the paper: Enhancing Taiwanese Hokkien Dual Translation by Exploring and Standardizing of Four Writing Systems

Explore other models and datasets in the Taiwanese Hokkien LLM collection.

Model description

  • Base Model: Bohanlu/Taigi-Llama-2-7B
  • Usage: This model can be used for translating between Traditional Chinese or English and Taiwanese Hokkien (Hanzi, POJ, or Hanlo). It also supports translation between different scripts of Taiwanese Hokkien (Hanzi, POJ, Hanlo).
  • Language(s) (NLP): Taiwanese Hokkien (Hanzi, POJ and Hanlo), Traditional Chinese and English
  • Input: Text in source language
  • Output: Text in target language
  • Model Size: 7B parameters

Prompt Template

{BOS}[TRANS]\n{source_sentence}\n[/TRANS]\n[{target_language}]\n
  • source_sentence: The sentence you want to translate.
  • target_language: The target language you want to translate to. Use "ZH" for Traditional Chinese, "EN" for English, "POJ" for Taiwanese Hokkien POJ, "HL" for Taiwanese Hokkien Hanlo, and "HAN" for Taiwanese Hokkien Hanzi.
  • Ensure the prompt ends with a newline.
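
For example, translating the English sentence "How are you today?" into POJ fills the template as follows (here {BOS} stands for the tokenizer's BOS token; the tokenizer adds it automatically during encoding, which is why the code below omits it from the template):

[TRANS]
How are you today?
[/TRANS]
[POJ]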

Usage Example

from transformers import AutoModelForCausalLM, AutoTokenizer, TextGenerationPipeline
import torch
import accelerate

def get_pipeline(path: str, tokenizer: AutoTokenizer, accelerator: accelerate.Accelerator) -> TextGenerationPipeline:
    # Load the model in half precision and let device_map='auto' place it on the available devices.
    model = AutoModelForCausalLM.from_pretrained(
        path, torch_dtype=torch.float16, device_map='auto', trust_remote_code=True)

    # Stop generation at either the EOS or the PAD token.
    terminators = [tokenizer.eos_token_id, tokenizer.pad_token_id]

    pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer,
                                      num_workers=accelerator.state.num_processes * 4,
                                      pad_token_id=tokenizer.pad_token_id,
                                      eos_token_id=terminators)

    return pipeline

model_dir = "Bohanlu/Taigi-Llama-2-Translator-7B" # or "Bohanlu/Taigi-Llama-2-Translator-13B" for the 13B model
tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=False)

accelerator = accelerate.Accelerator()
pipe = get_pipeline(model_dir, tokenizer, accelerator)

PROMPT_TEMPLATE = "[TRANS]\n{source_sentence}\n[/TRANS]\n[{target_language}]\n"

def translate(source_sentence: str, target_language: str) -> str:
    prompt = PROMPT_TEMPLATE.format(source_sentence=source_sentence, target_language=target_language)
    # Greedy decoding with a mild repetition penalty; return only the newly generated text.
    out = pipe(prompt, return_full_text=False, repetition_penalty=1.1, do_sample=False)[0]['generated_text']
    # Trim everything from the closing tag (e.g. "[/HAN]") onward.
    return out[:out.find("[/")].strip()

source_sentence = "How are you today?"

print("To Hanzi: " + translate(source_sentence, "HAN"))
# Output: To Hanzi: 你今仔日好無?

print("To POJ: " + translate(source_sentence, "POJ"))
# Output: To POJ: Lí kin-á-ji̍t án-chóaⁿ?

print("To Traditional Chinese: " + translate(source_sentence, "ZH"))
# Output: To Traditional Chinese: 你今天好嗎?

print("To Hanlo: " + translate(source_sentence, "HL"))
# Output: To Hanlo: 你今仔日好無?
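
As noted in the model description, the model also translates between Taiwanese Hokkien writing systems, so the same translate helper can be reused for script conversion. A minimal sketch; the printed result is omitted here since it depends on the model:

# Script-to-script conversion: Taiwanese Hokkien Hanzi in, POJ out.
print("Hanzi to POJ: " + translate("你今仔日好無?", "POJ"))
# The output should be the POJ romanization of the input sentence.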

Citation

If you find the resources in the Taiwanese Hokkien LLM collection useful in your work, please cite them using the following reference:

@misc{lu2024enhancing,
      title={Enhancing Taiwanese Hokkien Dual Translation by Exploring and Standardizing of Four Writing Systems}, 
      author={Bo-Han Lu and Yi-Hsuan Lin and En-Shiun Annie Lee and Richard Tzong-Han Tsai},
      year={2024},
      eprint={2403.12024},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}