---
pipeline_tag: text-generation
language:
- zh
- en
license: apache-2.0
tags:
- text-generation-inference
- llama
- gguf
base_model: MediaTek-Research/Breeze-7B-32k-Instruct-v1_0
---

## Description

This repo contains GGUF format model files for [MediaTek-Research/Breeze-7B-32k-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Instruct-v1_0).

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

## Provided files

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ---- |
| [Breeze-7B-32k-Instruct-v1_0-Q4_K_M.gguf](https://huggingface.co/chienweichang/Breeze-7B-32k-Instruct-v1_0-GGUF/blob/main/Breeze-7B-32k-Instruct-v1_0-Q4_K_M.gguf) | Q4_K_M | 4 | 4.54 GB | medium, balanced quality - recommended |
| [Breeze-7B-32k-Instruct-v1_0-Q5_0.gguf](https://huggingface.co/chienweichang/Breeze-7B-32k-Instruct-v1_0-GGUF/blob/main/Breeze-7B-32k-Instruct-v1_0-Q5_0.gguf) | Q5_0 | 5 | 5.18 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Breeze-7B-32k-Instruct-v1_0-Q5_K_M.gguf](https://huggingface.co/chienweichang/Breeze-7B-32k-Instruct-v1_0-GGUF/blob/main/Breeze-7B-32k-Instruct-v1_0-Q5_K_M.gguf) | Q5_K_M | 5 | 5.32 GB | large, very low quality loss - recommended |
| [Breeze-7B-32k-Instruct-v1_0-Q5_K_S.gguf](https://huggingface.co/chienweichang/Breeze-7B-32k-Instruct-v1_0-GGUF/blob/main/Breeze-7B-32k-Instruct-v1_0-Q5_K_S.gguf) | Q5_K_S | 5 | 5.18 GB | large, low quality loss - recommended |
| [Breeze-7B-32k-Instruct-v1_0-Q6_K.gguf](https://huggingface.co/chienweichang/Breeze-7B-32k-Instruct-v1_0-GGUF/blob/main/Breeze-7B-32k-Instruct-v1_0-Q6_K.gguf) | Q6_K | 6 | 6.14 GB | very large, extremely low quality loss |
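
As a quick check, any of the files above can be run with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). Below is a minimal sketch, assuming the Q4_K_M file from the table has been downloaded into the working directory; `n_ctx` and `n_gpu_layers` are illustrative settings, not official recommendations:

```python
# Minimal sketch: load a GGUF file from this repo with llama-cpp-python.
# Assumes Breeze-7B-32k-Instruct-v1_0-Q4_K_M.gguf is in the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="Breeze-7B-32k-Instruct-v1_0-Q4_K_M.gguf",
    n_ctx=32768,      # the model supports a 32k-token context
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "你好,請問你可以完成什麼任務?"}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```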

## Original model card

---

# Model Card for MediaTek Research Breeze-7B-32k-Instruct-v1_0

MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use.

[Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) is the base model for the Breeze-7B series.
It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case.

[Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) derives from the base model Breeze-7B-Base, making the resulting model amenable to be used as-is for commonly seen tasks.

[Breeze-7B-32k-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Base-v1_0) is extended from the base model with more data, a base change, and the disabling of the sliding window, which together raise the context length to 32k tokens.
Roughly speaking, that is equivalent to 44k Traditional Chinese characters.

[Breeze-7B-32k-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Instruct-v1_0) derives from the base model Breeze-7B-32k-Base, making the resulting model amenable to be used as-is for commonly seen tasks.

Practicality-wise:
- Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese compared to Mistral-7B and Llama 7B; a tokenizer comparison sketch follows this list. [See [Inference Performance](#inference-performance).]
- Breeze-7B-Instruct can be used as-is for common tasks such as Q&A, RAG, multi-round chat, and summarization.
- Breeze-7B-32k-Instruct can perform tasks at a document level (for Chinese, roughly 20 to 40 pages).
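
To make the vocabulary claim concrete, here is a toy comparison, not from the original card, of token counts on a short Traditional Chinese sentence; the sample sentence is made up, and fewer tokens per sentence means fewer decoding steps, which is what drives the claimed speedup:

```python
# Toy sketch: count how many tokens each tokenizer needs for the same
# Traditional Chinese sentence.
from transformers import AutoTokenizer

text = "人工智慧正在改變我們的生活方式。"
for repo in ["MediaTek-Research/Breeze-7B-Instruct-v1_0",
             "mistralai/Mistral-7B-v0.1"]:
    tok = AutoTokenizer.from_pretrained(repo)
    n_tokens = len(tok(text)["input_ids"])
    print(f"{repo}: {n_tokens} tokens")
```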

*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*

## Features

- Breeze-7B-32k-Base-v1_0
  - Expanded vocabulary (from 32k to 62k tokens) to better support Traditional Chinese
  - 32k-token context length
- Breeze-7B-32k-Instruct-v1_0
  - Expanded vocabulary (from 32k to 62k tokens) to better support Traditional Chinese
  - 32k-token context length
  - Multi-turn dialogue (without special handling for harmfulness)

## Model Details

- Breeze-7B-32k-Base-v1_0
  - Pretrained from: [Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0)
  - Model type: Causal decoder-only transformer language model
  - Language: English and Traditional Chinese (zh-tw)
- Breeze-7B-32k-Instruct-v1_0
  - Finetuned from: [Breeze-7B-32k-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Base-v1_0)
  - Model type: Causal decoder-only transformer language model
  - Language: English and Traditional Chinese (zh-tw)

## Long-context Performance

#### Needle-in-a-haystack Performance

We use the passkey retrieval task to test the model's ability to attend to various depths in a given sequence.
A key is placed within a long, distracting document for the model to retrieve.
The key position is binned into 16 bins, with 20 test cases per bin.
Breeze-7B-32k-Base clears the task with over 90% accuracy, as shown in the figure below.
![Needle-in-a-haystack Performance](https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Base-v1_0/resolve/main/needle-in-a-haystack-performance.png)
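
For intuition, a toy version of one such test case might be constructed as follows; this is an illustrative sketch, not the evaluation harness used by the authors, and the filler text and prompt wording are made up:

```python
# Toy sketch of a passkey-retrieval test case: hide a random key at a
# given depth inside filler text, then ask the model to repeat it.
import random

def make_passkey_case(depth: float, filler_sentences: int = 2000) -> tuple[str, str]:
    passkey = str(random.randint(10000, 99999))
    sentences = ["The grass is green. The sky is blue."] * filler_sentences
    pos = int(len(sentences) * depth)  # depth in [0, 1] maps to one of the 16 bins
    sentences.insert(pos, f"The pass key is {passkey}. Remember it.")
    prompt = " ".join(sentences) + "\nWhat is the pass key?"
    return prompt, passkey

prompt, expected = make_passkey_case(depth=0.5)  # key hidden mid-document
```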

#### Long-DRCD Performance

| Model | DRCD (EM) | DRCD-16k (EM) | DRCD-32k (EM) |
|---------------------------|----------|--------------|--------------|
| **Breeze-7B-32k-Instruct-v1\_0** | 76.9 | 54.82 | 44.26 |
| **Breeze-7B-32k-Base-v1\_0** | 79.73 | 69.68 | 61.55 |
| **Breeze-7B-Base-v1\_0** | 80.61 | 21.79 | 15.29 |

#### Short-Benchmark Performance

| Model | TMMLU+ | MMLU | TABLE | MT-Bench-tw | MT-Bench |
|---------------------------|----------|--------------|--------------|-----|-----|
| **Breeze-7B-32k-Instruct-v1\_0** | 41.37 | 61.34 | 34 | 5.8 | 7.4 |
| **Breeze-7B-Instruct-v1\_0** | 42.67 | 62.73 | 39.58 | 6.0 | 7.4 |

## Use in Transformers

First, install the direct dependencies:
```bash
pip install transformers torch accelerate
```
<p style="color:red;">Flash-Attention 2 is strongly recommended for long-context scenarios.</p>

```bash
pip install packaging ninja
pip install flash-attn
```
Then load the model in transformers:
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-32k-Instruct-v1_0")
>>> model = AutoModelForCausalLM.from_pretrained(
... "MediaTek-Research/Breeze-7B-32k-Instruct-v1_0",
... device_map="auto",
... torch_dtype=torch.bfloat16,
... attn_implementation="flash_attention_2"
... )
>>> chat = [
... {"role": "user", "content": "你好,請問你可以完成什麼任務?"},
... {"role": "assistant", "content": "你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。"},
... {"role": "user", "content": "太棒了!"},
... ]
>>> tokenizer.apply_chat_template(chat, tokenize=False)
"<s>You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. [INST] 你好,請問你可以完成什麼任務? [/INST] 你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。 [INST] 太棒了! [/INST] "
# Tokenized results
# ['▁', '你好', ',', '請問', '你', '可以', '完成', '什麼', '任務', '?']
# ['▁', '你好', ',', '我', '可以', '幫助', '您', '解決', '各種', '問題', '、', '提供', '資訊', '和', '協助', '您', '完成', '許多', '不同', '的', '任務', '。', '例如', ':', '回答', '技術', '問題', '、', '提供', '建議', '、', '翻譯', '文字', '、', '尋找', '資料', '或', '協助', '您', '安排', '行程', '等', '。', '請', '告訴', '我', '如何', '能', '幫助', '您', '。']
# ['▁', '太', '棒', '了', '!']
```
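
To actually generate a reply from the templated conversation, something like the following works; this is a minimal sketch, and the sampling parameters are illustrative rather than official recommendations:

```python
# Minimal sketch: tokenize the chat with the template and generate a reply.
inputs = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```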

## Citation

```bibtex
@article{MediaTek-Research2024breeze7b,
  title={Breeze-7B Technical Report},
  author={Chan-Jan Hsu and Chang-Le Liu and Feng-Ting Liao and Po-Chun Hsu and Yi-Chang Chen and Da-Shan Shiu},
  year={2024},
  eprint={2403.02712},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```