YC-Chen committed · Commit 731c46c · verified · 1 Parent(s): 95cfb47

Update README.md

Files changed (1)
  1. README.md +9 -8
README.md CHANGED
@@ -4,14 +4,15 @@ pipeline_tag: text-generation
 
# Model Card for Breeze-7B-Instruct-v0.1
 
- Breeze-7B-Instruct-v0.1 is a 7-billion-parameter language model built from Mistral-7B and tailored for Traditional Chinese (TC).
- This model expands the TC vocabulary (extra 30k TC tokens) based on the original Mistral-7B to better adapt to TC and improve inference speed,
- resulting in a doubling of the original tokenizer's inference speed.
- To the best of our knowledge, this is the first work on vocabulary expansion in TC.
- This model uses 250GB of TC data for continued pre-training and uses over 1M instances for further supervised fine-tuning.
- Breeze-7B-Instruct-v0.1 performs well on both EN and TC benchmarks.
- This model outperforms Taiwan-LLM-7B-v2.1-chat, Taiwan-LLM-13B-v2.0-chat, and Yi-6B-Chat on all TC benchmarks
- and is comparable with Mistral-7B-Instruct-v0.1 on MMLU and MT-Bench in English.
+ 
+ [Breeze-7B-Base-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0.1) is a language model that builds upon the foundation of Mistral-7B, specifically enhanced for Traditional Chinese. This model introduces an expanded vocabulary with an additional 30,000 Traditional Chinese tokens, significantly improving its performance in understanding and generating Traditional Chinese text. As a result, the model is twice as efficient as Mistral-7B at encoding and decoding Traditional Chinese.
+ 
+ 
+ [Breeze-7B-Instruct-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1) derives from the base model Breeze-7B-Base-v0.1, which has been continually pre-trained on a substantial dataset of 250GB of Traditional Chinese content. Additionally, it has undergone supervised fine-tuning on over 1 million instances to sharpen its capabilities. Breeze-7B-Instruct-v0.1 demonstrates impressive performance on benchmarks in both English and Traditional Chinese, surpassing Taiwan-LLM-7B-v2.1-chat, Taiwan-LLM-13B-v2.0-chat, and Qwen-7B-chat in Traditional Chinese assessments, and outperforming Yi-6B-Chat on some benchmarks. In English evaluations, Breeze-7B-Instruct-v0.1 shows results comparable to Mistral-7B-Instruct-v0.1 on the MMLU and MT-Bench benchmarks. This marks a significant milestone: to the best of our knowledge, it is the first instance of vocabulary expansion in a model tailored for Traditional Chinese.
+ 
+ 
+ [Breeze-7B-Instruct-64k-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1) is an extension of the Breeze-7B-v0.1 model that enables a 64k context length, equivalent to roughly 88k Traditional Chinese characters.
+ 
 
*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*
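
To make the doubled encoding-efficiency claim above concrete, here is a minimal sketch comparing token counts on the same Traditional Chinese passage. The Breeze model ID comes from the README; the Mistral repo ID `mistralai/Mistral-7B-v0.1` and the sample sentence are illustrative assumptions, not part of the commit.

```python
# Minimal sketch: compare how many tokens each tokenizer needs for the same
# Traditional Chinese text. The Mistral repo ID and the sample sentence are
# assumptions for illustration; the Breeze ID comes from the README above.
from transformers import AutoTokenizer

breeze_tok = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0.1")
mistral_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

text = "語言模型在繁體中文上的編碼效率,決定了推理速度與可用的上下文長度。"

breeze_ids = breeze_tok.encode(text, add_special_tokens=False)
mistral_ids = mistral_tok.encode(text, add_special_tokens=False)

print(f"Breeze tokens:  {len(breeze_ids)}")
print(f"Mistral tokens: {len(mistral_ids)}")
# If the 30,000 added Traditional Chinese tokens cover the passage well,
# the Breeze count should come out to roughly half the Mistral count.
```

Fewer tokens per character means fewer decoding steps per generated character and more text per context window, which is what motivates the 64k variant.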
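
On the 64k claim itself, the stated equivalence works out to about 1.4 Traditional Chinese characters per token (88,000 / 64,000 ≈ 1.375). A quick, hedged way to see the advertised window without downloading weights is to read the model config; `max_position_embeddings` is the standard field on Mistral-style configs, though the exact value stored in this repo is an assumption here.

```python
# Minimal sketch: inspect the advertised context window of the 64k model
# without loading weights. max_position_embeddings is the standard field on
# Mistral-style configs; the exact value for this repo is not guaranteed here.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-64k-v0.1")
print(cfg.max_position_embeddings)  # expect a value in the 64k range

# Rough arithmetic behind "64k tokens ≈ 88k Traditional Chinese characters":
print(88_000 / 64_000)  # ≈ 1.375 characters per token on average
```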