CISCai committed
Commit c5a8f50
1 Parent(s): 5a74039

Upload 13 files

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ CodeQwen1.5-7B-Chat.imatrix.dat filter=lfs diff=lfs merge=lfs -text
+ CodeQwen1.5-7B-Chat.IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+ CodeQwen1.5-7B-Chat.IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+ CodeQwen1.5-7B-Chat.IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+ CodeQwen1.5-7B-Chat.IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
+ CodeQwen1.5-7B-Chat.IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ CodeQwen1.5-7B-Chat.IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+ CodeQwen1.5-7B-Chat.IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+ CodeQwen1.5-7B-Chat.IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+ CodeQwen1.5-7B-Chat.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ CodeQwen1.5-7B-Chat.IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+ CodeQwen1.5-7B-Chat.IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
CodeQwen1.5-7B-Chat.IQ1_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b86d8e751b23d47d11e79a584df4355486127559d4070d13db0147352e51ecbf
+ size 2458142624
CodeQwen1.5-7B-Chat.IQ1_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7447b616905febab817c6645c39b0a38bd6e752a2056aa52d25649b45382a229
+ size 2361411488
CodeQwen1.5-7B-Chat.IQ2_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0c111b64365cf2638f0f57b0125a4184a38d9f41b3e51a25018a3caece38f31
+ size 3008030624
CodeQwen1.5-7B-Chat.IQ2_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d1ab2e22c0e741e70cb3cd88954786b3ec9642cbc544b6500b2f830ada3b03b
+ size 2879055776
CodeQwen1.5-7B-Chat.IQ2_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:857eee63ac30516b4fcce0fd6abe911ebc675a9884556bd2082781d97435cfff
+ size 2765113248
CodeQwen1.5-7B-Chat.IQ2_XXS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8255935cebf7e41ea76655c1686a36285304f4748ceef6fed21e69316c353a82
+ size 2619361184
CodeQwen1.5-7B-Chat.IQ3_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9b001b56efc09202c1d0037a1de2d0a47b3d52688679470eea16f41f8bdee0e3
+ size 3608545184
CodeQwen1.5-7B-Chat.IQ3_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d757a4af40804806765790c9ea068432d67127dfe5c220a159273609c64e8594
+ size 3509716896
CodeQwen1.5-7B-Chat.IQ3_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90587e247cff3827f233f2e7d292c170cdb605a5c5410e99e86e92e911427820
+ size 3357542304
CodeQwen1.5-7B-Chat.IQ3_XXS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2cf12384b04118574d7c4c7db5616ccb7a8b89bb767d7f12b31b7b02b6ec783
+ size 3228231584
CodeQwen1.5-7B-Chat.IQ4_NL.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51406beddbba5e34e8dd75e5e8e2efd90f93b362aec8f1b851ca78ebd5561c07
+ size 4187826080
CodeQwen1.5-7B-Chat.imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9cebe28e9d02a9cd7676e685613a516268c650b618174ae438f3738349806717
+ size 4873438
README.md CHANGED
@@ -1,5 +1,268 @@
  ---
+ base_model: Qwen/CodeQwen1.5-7B-Chat
  license: other
  license_name: tongyi-qianwen
- license_link: https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE
+ license_link: >-
+   https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE
+ language:
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - chat
+ model_creator: Qwen
+ model_name: CodeQwen1.5-7B-Chat
+ model_type: qwen2
+ quantized_by: CISC
  ---
+
+ # CodeQwen1.5-7B-Chat - SOTA GGUF
+ - Model creator: [Qwen](https://huggingface.co/Qwen)
+ - Original model: [CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat)
+
+ <!-- description start -->
+ ## Description
+
+ This repo contains State Of The Art quantized GGUF format model files for [CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat).
+
+ Quantization was done with an importance matrix that was trained for ~1M tokens (256 batches of 4096 tokens) of answers from the [CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction) dataset.
+
+ NOTE: Because most tensors in Qwen2 models are oddly shaped, a considerable portion of the quantization fell back to IQ4_NL instead of the specified method. This makes the model files significantly larger than usual (and "smarter"; even IQ1_S is perfectly usable).
+
+ <!-- description end -->
+
+
+ <!-- prompt-template start -->
+ ## Prompt template: ChatML
+
+ ```
+ <|im_start|>system
+ {system_prompt}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
+
+ <!-- prompt-template end -->
+
+
+ <!-- compatibility_gguf start -->
+ ## Compatibility
+
+ These quantised GGUFv3 files are compatible with llama.cpp from February 27th 2024 onwards, as of commit [0becb22](https://github.com/ggerganov/llama.cpp/commit/0becb22ac05b6542bd9d5f2235691aa1d3d4d307).
+
+ They are also compatible with many third party UIs and libraries, provided they are built using a recent llama.cpp.
+
+ ## Explanation of quantisation methods
+
+ <details>
+ <summary>Click to see details</summary>
+
+ The new methods available are:
+
+ * GGML_TYPE_IQ1_S - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.56 bits per weight (bpw)
+ * GGML_TYPE_IQ1_M - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.75 bpw
+ * GGML_TYPE_IQ2_XXS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.06 bpw
+ * GGML_TYPE_IQ2_XS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.31 bpw
+ * GGML_TYPE_IQ2_S - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.5 bpw
+ * GGML_TYPE_IQ2_M - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.7 bpw
+ * GGML_TYPE_IQ3_XXS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.06 bpw
+ * GGML_TYPE_IQ3_XS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.3 bpw
+ * GGML_TYPE_IQ3_S - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.44 bpw
+ * GGML_TYPE_IQ3_M - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.66 bpw
+ * GGML_TYPE_IQ4_XS - 4-bit quantization in super-blocks with an importance matrix applied, effectively using 4.25 bpw
+ * GGML_TYPE_IQ4_NL - 4-bit non-linearly mapped quantization with an importance matrix applied, effectively using 4.5 bpw
+
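+ As a rough, illustrative sanity check (not part of the original list), the effective bpw values translate into approximate file sizes as sketched below. This assumes ~7.3 billion weights for CodeQwen1.5-7B and ignores metadata; the files in this repo are noticeably larger because many tensors fell back to IQ4_NL, as noted in the description.
+
+ ```python
+ # Hypothetical back-of-the-envelope estimate: size in GB ~= n_weights * bpw / 8 bytes.
+ n_weights = 7.3e9  # approximate parameter count, assumed here for illustration only
+ for name, bpw in [("IQ1_S", 1.56), ("IQ2_XS", 2.31), ("IQ3_M", 3.66), ("IQ4_NL", 4.5)]:
+     print(f"{name}: ~{n_weights * bpw / 8 / 1e9:.1f} GB at {bpw} bpw")
+ ```
+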
+ Refer to the Provided Files table below to see what files use which methods, and how.
+ </details>
+ <!-- compatibility_gguf end -->
+
+ <!-- README_GGUF.md-provided-files start -->
+ ## Provided files
+
+ | Name | Quant method | Bits | Size | Max RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | [CodeQwen1.5-7B-Chat.IQ1_S.gguf](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF/blob/main/CodeQwen1.5-7B-Chat.IQ1_S.gguf) | IQ1_S | 1 | 2.2 GB | 2.4 GB | smallest, significant quality loss |
+ | [CodeQwen1.5-7B-Chat.IQ1_M.gguf](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF/blob/main/CodeQwen1.5-7B-Chat.IQ1_M.gguf) | IQ1_M | 1 | 2.3 GB | 2.5 GB | very small, significant quality loss |
+ | [CodeQwen1.5-7B-Chat.IQ2_XXS.gguf](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF/blob/main/CodeQwen1.5-7B-Chat.IQ2_XXS.gguf) | IQ2_XXS | 2 | 2.5 GB | 2.7 GB | very small, high quality loss |
+ | [CodeQwen1.5-7B-Chat.IQ2_XS.gguf](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF/blob/main/CodeQwen1.5-7B-Chat.IQ2_XS.gguf) | IQ2_XS | 2 | 2.6 GB | 2.8 GB | very small, high quality loss |
+ | [CodeQwen1.5-7B-Chat.IQ2_S.gguf](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF/blob/main/CodeQwen1.5-7B-Chat.IQ2_S.gguf) | IQ2_S | 2 | 2.7 GB | 2.9 GB | small, substantial quality loss |
+ | [CodeQwen1.5-7B-Chat.IQ2_M.gguf](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF/blob/main/CodeQwen1.5-7B-Chat.IQ2_M.gguf) | IQ2_M | 2 | 2.9 GB | 3.1 GB | small, greater quality loss |
+ | [CodeQwen1.5-7B-Chat.IQ3_XXS.gguf](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF/blob/main/CodeQwen1.5-7B-Chat.IQ3_XXS.gguf) | IQ3_XXS | 3 | 3.1 GB | 3.3 GB | very small, high quality loss |
+ | [CodeQwen1.5-7B-Chat.IQ3_XS.gguf](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF/blob/main/CodeQwen1.5-7B-Chat.IQ3_XS.gguf) | IQ3_XS | 3 | 3.2 GB | 3.4 GB | small, substantial quality loss |
+ | [CodeQwen1.5-7B-Chat.IQ3_S.gguf](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF/blob/main/CodeQwen1.5-7B-Chat.IQ3_S.gguf) | IQ3_S | 3 | 3.3 GB | 3.5 GB | small, greater quality loss |
+ | [CodeQwen1.5-7B-Chat.IQ3_M.gguf](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF/blob/main/CodeQwen1.5-7B-Chat.IQ3_M.gguf) | IQ3_M | 3 | 3.4 GB | 3.6 GB | medium, balanced quality - recommended |
+ | [CodeQwen1.5-7B-Chat.IQ4_NL.gguf](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF/blob/main/CodeQwen1.5-7B-Chat.IQ4_NL.gguf) | IQ4_NL | 4 | 4.0 GB | 4.2 GB | small, substantial quality loss |
+
+ Generated importance matrix file: [CodeQwen1.5-7B-Chat.imatrix.dat](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF/blob/main/CodeQwen1.5-7B-Chat.imatrix.dat)
+
+ **Note**: the above RAM figures assume no GPU offloading with 4K context. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
+
+ <!-- README_GGUF.md-provided-files end -->
+
+ <!-- README_GGUF.md-how-to-run start -->
+ ## Example `llama.cpp` command
+
+ Make sure you are using `llama.cpp` from commit [0becb22](https://github.com/ggerganov/llama.cpp/commit/0becb22ac05b6542bd9d5f2235691aa1d3d4d307) or later.
+
+ ```shell
+ ./main -ngl 33 -m CodeQwen1.5-7B-Chat.IQ2_XS.gguf --color -c 65536 --temp 1.0 --repeat-penalty 1.0 --top-p 0.95 -n -1 -p "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
+ ```
+
+ Change `-ngl 33` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
+
+ Change `-c 65536` to the desired sequence length.
+
+ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
+
+ If you are low on VRAM/RAM, try quantizing the K-cache with `-ctk q8_0` or even `-ctk q4_0` for big memory savings (depending on context size).
+ There is a similar option for the V-cache (`-ctv`), however that is [not working yet](https://github.com/ggerganov/llama.cpp/issues/4425).
+
+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
+
+ ## How to run from Python code
+
+ You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) module.
+
+ ### How to load this model in Python code, using llama-cpp-python
+
+ For full documentation, please see: [llama-cpp-python docs](https://llama-cpp-python.readthedocs.io/en/latest/).
+
+ #### First install the package
+
+ Run one of the following commands, according to your system:
+
+ ```shell
+ # Prebuilt wheel with basic CPU support
+ pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
+ # Prebuilt wheel with NVidia CUDA acceleration (cu121; use cu122 etc. to match your CUDA version)
+ pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
+ # Prebuilt wheel with Metal GPU acceleration
+ pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal
+ # Build base version with no GPU acceleration
+ pip install llama-cpp-python
+ # With NVidia CUDA acceleration
+ CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python
+ # Or with OpenBLAS acceleration
+ CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
+ # Or with CLBLast acceleration
+ CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
+ # Or with AMD ROCm GPU acceleration (Linux only)
+ CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
+ # Or with Metal GPU acceleration for macOS systems only
+ CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
+ # Or with Vulkan acceleration
+ CMAKE_ARGS="-DLLAMA_VULKAN=on" pip install llama-cpp-python
+ # Or with Kompute acceleration
+ CMAKE_ARGS="-DLLAMA_KOMPUTE=on" pip install llama-cpp-python
+ # Or with SYCL acceleration
+ CMAKE_ARGS="-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" pip install llama-cpp-python
+
+ # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
+ $env:CMAKE_ARGS = "-DLLAMA_CUDA=on"
+ pip install llama-cpp-python
+ ```
+
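+ If you don't already have one of the GGUF files locally, you can fetch it from this repo first. Below is a minimal sketch using `huggingface_hub` (install it with `pip install huggingface_hub`); the IQ2_XS file is just one of the provided quants:
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # Downloads the chosen quant into the local Hugging Face cache and returns its
+ # path, which can then be passed to Llama(model_path=...) below.
+ model_path = hf_hub_download(
+     repo_id="CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF",
+     filename="CodeQwen1.5-7B-Chat.IQ2_XS.gguf",
+ )
+ print(model_path)
+ ```
+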
+ #### Simple llama-cpp-python example code
+
+ ```python
+ from llama_cpp import Llama
+
+ # Chat Completion API
+
+ llm = Llama(model_path="./CodeQwen1.5-7B-Chat.IQ2_XS.gguf", n_gpu_layers=33, n_ctx=65536)
+ print(llm.create_chat_completion(
+     messages = [
+         {"role": "system", "content": "You are an expert AI coding assistant."},
+         {
+             "role": "user",
+             "content": "Pick a LeetCode challenge and solve it in Python."
+         }
+     ]
+ ))
+ ```
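+
+ You can also prompt the model directly with the ChatML template shown earlier. The following is a small illustrative sketch that reuses the `llm` object from above; the user message, stop string and sampling values are assumptions, not settings from this repo:
+
+ ```python
+ # Completion API with a hand-built ChatML prompt (see the prompt template section).
+ prompt = (
+     "<|im_start|>system\nYou are a helpful assistant<|im_end|>\n"
+     "<|im_start|>user\nWrite a Python one-liner that reverses a string.<|im_end|>\n"
+     "<|im_start|>assistant\n"
+ )
+ print(llm(prompt, max_tokens=256, stop=["<|im_end|>"], temperature=1.0, top_p=0.95))
+ ```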
+
+ <!-- README_GGUF.md-how-to-run end -->
+
+ <!-- original-model-card start -->
+ # CodeQwen1.5-7B-Chat
+
+
+ ## Introduction
+
+ CodeQwen1.5 is the code-specific version of Qwen1.5. It is a transformer-based decoder-only language model pretrained on a large amount of code data.
+
+ * Strong code generation capabilities and competitive performance across a series of benchmarks;
+ * Support for long context understanding and generation with a context length of 64K tokens;
+ * Support for 92 coding languages;
+ * Excellent performance in text-to-SQL, bug fixing, etc.
+
+
+ For more details, please refer to our [blog post](https://qwenlm.github.io/blog/codeqwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
+
+
+ ## Model Details
+ CodeQwen1.5 is based on Qwen1.5, a language model series including decoder language models of different model sizes. It is trained on 3 trillion tokens of code data, and it includes grouped-query attention (GQA) for efficient inference.
+
+
+ ## Requirements
+ The code for Qwen1.5 is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
+ ```
+ KeyError: 'qwen2'.
+ ```
+
+ ## Quickstart
+
+ Here is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model, and how to generate content.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ device = "cuda"  # the device to load the model onto
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "Qwen/CodeQwen1.5-7B-Chat",
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/CodeQwen1.5-7B-Chat")
+
+ prompt = "Write a quicksort algorithm in python."
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(device)
+
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     max_new_tokens=512
+ )
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ ```
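+
+ For interactive use it can be nicer to stream tokens as they are produced. This is not part of the original card; it is a minimal sketch using `TextStreamer`, reusing `model`, `tokenizer` and `model_inputs` from the snippet above:
+
+ ```python
+ from transformers import TextStreamer
+
+ # Prints tokens to stdout as they are generated instead of waiting for the
+ # full completion; skip_prompt avoids echoing the chat-formatted input.
+ streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+ _ = model.generate(model_inputs.input_ids, max_new_tokens=512, streamer=streamer)
+ ```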
+
+
+
+ ## Tips
+
+ * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json` (see the sketch below).
+
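+ Those hyper-parameters are applied automatically when the model is loaded with `from_pretrained`, but you can also load and inspect them explicitly. A minimal sketch, not from the original card:
+
+ ```python
+ from transformers import GenerationConfig
+
+ # Loads the recommended sampling settings that ship with the original model;
+ # individual values can be inspected or overridden before calling generate().
+ gen_config = GenerationConfig.from_pretrained("Qwen/CodeQwen1.5-7B-Chat")
+ print(gen_config)
+ ```
+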
+
+ ## Citation
+
+ If you find our work helpful, feel free to cite us.
+
+ ```
+ @article{qwen,
+   title={Qwen Technical Report},
+   author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
+   journal={arXiv preprint arXiv:2309.16609},
+   year={2023}
+ }
+ ```