YC-Chen committed
Commit eb0b8b7
1 parent: 26b50a0

Update README.md

Files changed (1): README.md (+5 −5)
@@ -20,13 +20,13 @@ Breeze-7B-Instruct to enable a 64k-token context length. Roughly speaking, that
 The current release version of Breeze-7B is v0.1.
 
 Practicality-wise:
-- Breeze expands the original vocabulary with additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, everything else being equal, Breeze operates at twice the inference speed for Traditional Chinese to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).]
-- Breeze-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization.
-- In particular, Breeze-Instruct-64k can perform tasks at a document level, not a chapter level.
+- Breeze-7B expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese compared to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).]
+- Breeze-7B-Instruct can be used as-is for common tasks such as Q&A, RAG, multi-round chat, and summarization.
+- In particular, Breeze-7B-Instruct-64k can perform tasks at a document level, not just a chapter level.
 
 Performance-wise:
-- Breeze demonstrates impressive performance in benchmarks for Traditional Chinese, when compared to similar sized open-source contemporaries such as Taiwan-LLM, QWen, and Yi. [See [Chat Model Performance](#chat-model-performance).]
-- Breeze shows comparable results to Mistral-7B-Instruct-v0.1 on the MMLU and MT-Bench benchmarks. [See [Chat Model Performance](#chat-model-performance).]
+- Breeze-7B-Instruct demonstrates impressive performance on Traditional Chinese benchmarks when compared to similarly sized open-source contemporaries such as Taiwan-LLM-7B/13B-chat, QWen-7B-Chat, and Yi-6B-Chat. [See [Chat Model Performance](#chat-model-performance).]
+- Breeze-7B-Instruct shows comparable results to Mistral-7B-Instruct-v0.1 on the MMLU and MT-Bench benchmarks. [See [Chat Model Performance](#chat-model-performance).]
 
 
 *A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*