Taiwan-LLM_v3_tokenizer

This repository contains a custom tokenizer for the Taiwan-LLM v3 model, a Traditional Mandarin language model based on the LLaMA architecture. The tokenizer was created by merging a custom-trained Mandarin SentencePiece model with the original LLaMA tokenizer, resulting in a vocabulary of 64,000 tokens.

Features

  • Supports both English and Traditional Mandarin text tokenization
  • Includes special tokens <|im_start|> and <|im_end|>
  • Vocabulary size of 64,000 tokens
  • Compatible with the LLaMA/Mistral model architecture

Usage

To use the Taiwan-LLM_v3_tokenizer in your project, first install the transformers library (along with sentencepiece, which the slow LlamaTokenizer relies on):

pip install transformers sentencepiece

Then load the tokenizer using the Hugging Face LlamaTokenizer class; the original LLaMA tokenizer is loaded as well to serve as a baseline for comparison:

from transformers import LlamaTokenizer

# Merged tokenizer (LLaMA vocabulary + Mandarin SentencePiece pieces)
taiwan_llm_tokenizer = LlamaTokenizer.from_pretrained("yentinglin/Taiwan-LLM_v3_tokenizer")

# Original LLaMA tokenizer, used as a baseline for comparison
original_llama_tokenizer = LlamaTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")
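
As a quick sanity check, the vocabulary size and special tokens listed under Features can be inspected directly (the exact count reported may differ slightly depending on how the added special tokens are counted):

print(taiwan_llm_tokenizer.vocab_size)                             # base vocabulary size
print(len(taiwan_llm_tokenizer))                                   # including any added special tokens
print(taiwan_llm_tokenizer.convert_tokens_to_ids("<|im_start|>"))  # special token maps to a single id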

Once loaded, the tokenizer handles both English and Traditional Mandarin text; the snippet below compares its token counts against the original LLaMA tokenizer:

text_en = """During the recent GTC (GPU Technology Conference), Nvidia CEO Jensen Huang took time out of his busy schedule to dine with the Taiwanese community in Silicon Valley. In his speech at the gathering, Huang referred to himself as a "great ambassador for Taiwan," expressing his gratitude for the island nation's role in Nvidia's growth and success."""
text_zh = "輝達(NVIDIA)執行長黃仁勳在GTC大會期間與矽谷台灣人餐敘,並在致詞時自詡為「很棒的台灣大使」。他說輝達和台灣一起成長,感謝台灣夥伴一路陪伴,「台灣拯救了輝達」。"

taiwan_llm_tokens_en = taiwan_llm_tokenizer.tokenize(text_en)
original_llama_tokens_en = original_llama_tokenizer.tokenize(text_en)

taiwan_llm_tokens_zh = taiwan_llm_tokenizer.tokenize(text_zh)
original_llama_tokens_zh = original_llama_tokenizer.tokenize(text_zh)

print(f"English text:")
print(f"Taiwan-LLM_v3_tokenizer: {len(taiwan_llm_tokens_en)} tokens")
print(f"Original LLaMA tokenizer: {len(original_llama_tokens_en)} tokens")

print(f"\nTraditional Mandarin text:")
print(f"Taiwan-LLM_v3_tokenizer: {len(taiwan_llm_tokens_zh)} tokens")
print(f"Original LLaMA tokenizer: {len(original_llama_tokens_zh)} tokens")

Training Data

The Mandarin SentencePiece model used in this tokenizer was trained on a diverse set of Traditional Mandarin text data, including:

  • Wikipedia articles
  • Legal documents
  • Online forum discussions
  • Cultural and historical texts

This ensures that the tokenizer is well-suited for a wide range of Traditional Mandarin language applications.
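
For reference, a Mandarin SentencePiece model of this kind is typically trained with the sentencepiece library; the corpus path, vocabulary size, and other settings below are illustrative assumptions, not the exact configuration used for this tokenizer:

import sentencepiece as spm

# Train a SentencePiece model on the preprocessed Traditional Mandarin corpus.
# File names and hyperparameters are placeholders.
spm.SentencePieceTrainer.train(
    input="zh_tw_corpus.txt",       # one sentence per line
    model_prefix="zh_tw_spm",       # produces zh_tw_spm.model / zh_tw_spm.vocab
    vocab_size=32000,
    character_coverage=0.9995,      # high coverage for CJK characters
    model_type="bpe",
)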

Tokenizer Merging Process

The tokenizer was created by following these steps (a sketch of the merge step appears after the list):

  1. Load and preprocess the Traditional Mandarin text data
  2. Train a Mandarin SentencePiece model using the preprocessed text data
  3. Merge the Mandarin SentencePiece model with the LLaMA tokenizer
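
Below is a minimal sketch of the merge step, following the common approach of appending new SentencePiece pieces to the LLaMA model proto; the file names, paths, and special-token handling are assumptions rather than the exact procedure used for this tokenizer:

import sentencepiece as spm
from sentencepiece import sentencepiece_model_pb2 as sp_pb2
from transformers import LlamaTokenizer

# Load the original LLaMA tokenizer and the newly trained Mandarin SentencePiece model.
llama_tokenizer = LlamaTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")
zh_sp = spm.SentencePieceProcessor()
zh_sp.Load("zh_tw_spm.model")  # placeholder path to the Mandarin model

llama_proto = sp_pb2.ModelProto()
llama_proto.ParseFromString(llama_tokenizer.sp_model.serialized_model_proto())
zh_proto = sp_pb2.ModelProto()
zh_proto.ParseFromString(zh_sp.serialized_model_proto())

# Append every Mandarin piece that is not already in the LLaMA vocabulary.
existing_pieces = {p.piece for p in llama_proto.pieces}
for piece in zh_proto.pieces:
    if piece.piece not in existing_pieces:
        new_piece = sp_pb2.ModelProto.SentencePiece()
        new_piece.piece = piece.piece
        new_piece.score = 0
        llama_proto.pieces.append(new_piece)

# Save the merged SentencePiece model and wrap it in a LlamaTokenizer.
with open("merged_zh_llama.model", "wb") as f:
    f.write(llama_proto.SerializeToString())

merged_tokenizer = LlamaTokenizer(vocab_file="merged_zh_llama.model")
merged_tokenizer.add_tokens(["<|im_start|>", "<|im_end|>"], special_tokens=True)
merged_tokenizer.save_pretrained("Taiwan-LLM_v3_tokenizer")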

Acknowledgements

This tokenizer was created using the LLaMA tokenizer and a custom-trained Mandarin SentencePiece model. We would like to thank the authors of the LLaMA model and the Hugging Face team for their contributions to the NLP community.
