---
license: apache-2.0
language:
  - zh
widget:
  - text: >-
      A chat between a curious user and an artificial intelligence assistant.
      The assistant gives helpful, detailed, and polite answers to the user's
      questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Acknowledge license to accept the repository.
extra_gated_prompt: Please contact the author for access.
extra_gated_button_content: Acknowledge license 同意以上內容
extra_gated_fields:
  Name: text
  Mail: text
  Organization: text
  Country: text
  Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author: checkbox
  使用Taiwan LLM必須明確地承認和歸功於優必達株式會社 Ubitus 以及原始作者: checkbox
---
*(Taiwan LLM logo)*

🌟 Check out the Taiwan-LLM Demo Chat-UI 🌟

# Model Card for Taiwan LLM 7B v2.0 base

Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan. Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning. This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances. It demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance. For detailed insights into Taiwan LLM's development and features, refer to our technical report.

## Model description

- **Model type:** A 7B-parameter GPT-like model fine-tuned on a mix of publicly available and synthetic datasets.
- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw)
- **Finetuned from model:** meta-llama/Llama-2-7b-hf
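A minimal loading sketch with `transformers` follows. The repository id below is an assumption inferred from the model name; since the repository is gated, substitute the id you were actually granted access to.

```python
# Minimal loading sketch. The repo id is an assumption based on the model name;
# this repository is gated, so access must be granted first.
def load_taiwan_llm(model_id: str = "yentinglin/Taiwan-LLM-7B-v2.0-base"):
    """Return (tokenizer, model) for causal text generation."""
    # Imported inside the function so the sketch can be read (and the helper
    # defined) without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" requires the accelerate package; drop it for CPU-only use.
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model
```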

## Model Sources

## Performance


## Intended uses

This is a base model: you should fine-tune it for instruction-following / chat applications.
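The widget example in the metadata above uses a Vicuna-style single-turn prompt. A small helper that reproduces that format is sketched below; whether the fine-tuned chat variants use exactly this template is an assumption based on the widget text.

```python
# System preamble taken verbatim from the widget example in the card metadata.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions."
)

def build_prompt(user_message: str) -> str:
    """Return a single-turn prompt ending at the assistant's turn, so that
    generation continues as the assistant's reply."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"
```

The resulting string can be tokenized and passed to `model.generate` once you have fine-tuned a chat variant.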

## Training hyperparameters


The following hyperparameters were used during training:

- learning_rate: 5e-05
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5.0
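The reported values map onto Hugging Face `TrainingArguments` fields roughly as sketched below. This is a reconstruction, not the authors' training script: values not listed above (batch size, weight decay, output directory, etc.) are intentionally left out rather than guessed, and the multi-GPU distribution is handled by the launcher (e.g. `accelerate launch` or `torchrun`), not by these arguments.

```python
# TrainingArguments kwargs mirroring only the hyperparameters reported above.
training_kwargs = dict(
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=5.0,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8:
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# e.g.: args = transformers.TrainingArguments(output_dir="out", **training_kwargs)
```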

## Citation

If you find Taiwan LLM useful in your work, please cite it with:

```bibtex
@misc{lin2023taiwan,
      title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model},
      author={Yen-Ting Lin and Yun-Nung Chen},
      year={2023},
      eprint={2311.17487},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Acknowledgement

Taiwan LLM v2 was developed in collaboration with Ubitus K.K., which provided valuable compute resources for the project.