---
base_model: augmxnt/shisa-7b-v1
datasets:
- augmxnt/ultra-orca-boros-en-ja-v1
- Open-Orca/SlimOrca
- augmxnt/shisa-en-ja-dpo-v1
inference: false
language:
- ja
- en
license: apache-2.0
model_creator: AUGMXNT
model_name: Shisa 7B v1
model_type: mistral
prompt_template: '[INST] <<SYS>>

  {system_message}

  <</SYS>>

  {prompt} [/INST]

  '
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Shisa 7B v1 - AWQ
- Model creator: [AUGMXNT](https://huggingface.co/augmxnt)
- Original model: [Shisa 7B v1](https://huggingface.co/augmxnt/shisa-7b-v1)

<!-- description start -->
## Description

This repo contains AWQ model files for [AUGMXNT's Shisa 7B v1](https://huggingface.co/augmxnt/shisa-7b-v1).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which adds support for all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/shisa-7B-v1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/shisa-7B-v1-GPTQ)
* [AUGMXNT's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/augmxnt/shisa-7b-v1)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Llama-2-Chat

```
[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
```

<!-- prompt-template end -->
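As a quick illustration (my own snippet, not part of the original card), this template can be filled with plain Python string formatting; the system message below is just a placeholder:

```python
# Fill the Llama-2-Chat template shown above (placeholder system message).
PROMPT_TEMPLATE = """[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
"""

def build_prompt(prompt, system_message="You are a helpful assistant."):
    """Return the fully formatted prompt string."""
    return PROMPT_TEMPLATE.format(system_message=system_message, prompt=prompt)

print(build_prompt("Tell me about AI"))
```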

<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters

I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/shisa-7B-v1-AWQ/tree/main) | 4 | 128 | [Shisa English Japanese DPO](https://huggingface.co/datasets/augmxnt/shisa-en-ja-dpo-v1/viewer/) | 4096 | 5.59 GB |

<!-- README_AWQ.md-provided-files end -->
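For a rough sense of what 4-bit, group-size-128 quantisation costs in storage, here is a back-of-envelope sketch of my own (an illustrative assumption, not the exact AWQ file layout; the repo is larger than this lower bound because some tensors, such as embeddings, remain in higher precision):

```python
# Back-of-envelope effective bits per weight for group-wise quantisation.
# Assumes one fp16 scale and one packed 4-bit zero-point per group of
# weights -- an illustrative assumption, not the official AWQ spec.
def bits_per_weight(bits=4, group_size=128, scale_bits=16, zero_bits=4):
    return bits + (scale_bits + zero_bits) / group_size

print(f"~{bits_per_weight():.3f} bits/weight")
```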

<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/shisa-7B-v1-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `shisa-7B-v1-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click **Load**, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->

<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.

For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/shisa-7B-v1-AWQ --quantization awq --dtype auto
```

- When using vLLM from Python code, again set `quantization=awq`.

For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]

system_message = "You are a helpful assistant."
# Note: a plain template string (not an f-string), so it can be filled in per prompt below.
prompt_template = '''[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
'''

prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/shisa-7B-v1-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/shisa-7B-v1-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

system_message = "You are a helpful assistant."
prompt = "Tell me about AI"
prompt_template = f'''[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print("Model output:", response)
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 or later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/shisa-7B-v1-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

system_message = "You are a helpful assistant."
prompt = "Tell me about AI"
prompt_template = f'''[INST] <<SYS>>
{system_message}
<</SYS>>
{prompt} [/INST]
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: AUGMXNT's Shisa 7B v1

# Shisa 7B

![Shi-chan and Sa-chan/シーちゃんとサーちゃん](https://huggingface.co/augmxnt/shisa-7b-v1/resolve/main/shisa.webp)

**Shisa 7B** (`shisa-7b-v1`) is a bilingual Japanese and English (JA/EN) general-purpose chat model that aims to achieve strong Japanese language performance while retaining robust English capabilities, using a synthetic-data-driven approach.

This model is based on [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) with a custom JA-optimized extended tokenizer that is >2x more efficient in Japanese than Mistral's original tokenizer. The base model was pre-trained on an additional 8B primarily-Japanese tokens. It was then fine-tuned with an expanded, machine-translated version of [airoboros-3.1](https://huggingface.co/datasets/jondurbin/airoboros-3.1), a set of the highest-scoring items from [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized), and additional [airoboros](https://github.com/jondurbin/airoboros) data freshly generated directly in the target languages.

We also release our base model, datasets, and pipeline code under a permissive Apache 2.0 license, which can be used for any purpose, commercial or otherwise:
* [shisa-base-7b-v1](https://huggingface.co/augmxnt/shisa-base-7b-v1) - our base model w/ an extended tokenizer and additional JA pre-training
* [shisa-pretrain-en-ja-v1](https://huggingface.co/datasets/augmxnt/shisa-pretrain-en-ja-v1) - our pre-training data set
* [ultra-orca-boros-en-ja](https://huggingface.co/datasets/augmxnt/ultra-orca-boros-en-ja-v1) - a synthetically generated, machine-translated, programmatically validated JA/EN fine-tuning dataset
* [shisa-en-ja-dpo-v1](https://huggingface.co/datasets/augmxnt/shisa-en-ja-dpo-v1) - a small subset of DPO pairs from ultrafeedback, along with JA DPO pairs using GPT-4 generated items as the chosen values and outputs from our preliminary 7B model as the rejected values
* [Shisa repository](https://github.com/AUGMXNT/shisa) - this includes our translation, dataset generation, training, and evaluation code

Moreover, we are in the process of publishing extended writeups and more details of our process, including ablation results, testing methodology, and key findings, [on our project wiki](https://github.com/AUGMXNT/shisa/wiki); these may be of interest to fellow researchers.

## Fine-Tuning
Our original intuition was to see if we could create a stronger Japanese model by incorporating the best [existing public JA training sets](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets). After initial review and testing, however, we decided that focusing solely on translation/generation of our own synthetic datasets could yield superior results with less training.

We compared multiple translation tools and, via manual review, judged that while `gpt-4` almost always delivered the highest quality translations, Google's `text-bison-32k` was a good balance of quality, cost and throughput. Over various iterations, we refined our translation approach to include some additional algorithms for flagging and filtering invalid translations, re-translating and backfilling as necessary.

We also took this project as an opportunity to apply some newer techniques such as incorporating [NEFTune](https://arxiv.org/abs/2310.05914) and [DPO](https://arxiv.org/abs/2305.18290) training.

For our v1 release, we picked from our release candidates based on a significant amount of human preference testing (thousands of generations and multiple rounds of pairwise comparisons). We analyzed our results with both win/loss/draw and [BTL modeling](https://datascience.oneoffcoder.com/btl-model.html) (iLSR) using [choix](https://github.com/lucasmaystre/choix).
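For readers unfamiliar with Bradley-Terry-Luce (BTL) modelling, the idea can be sketched in a few lines. This is a toy illustration of my own with made-up win counts, using a minimal MM (Zermelo) iteration, not the authors' actual choix-based pipeline:

```python
# Toy Bradley-Terry strength estimation from pairwise win counts.
# Minimal MM (Zermelo) algorithm; illustrative only.
def bradley_terry(wins, n_iter=200):
    """wins[i][j] = number of times model i beat model j."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(n_iter):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of model i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom else p[i])
        s = sum(new_p)
        p = [x * n / s for x in new_p]  # normalise so strengths sum to n
    return p

# Hypothetical counts: model 0 usually beats 1, model 1 usually beats 2.
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
strengths = bradley_terry(wins)
print(strengths)  # strengths[0] > strengths[1] > strengths[2]
```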

The best candidate model was fine-tuned in a 3-step process:

1. First, the model was fine-tuned on `ultra-orca-boros-en-ja` and SlimOrca ([WandB Log](https://wandb.ai/jondurbin/shisa-7b-v1/runs/k8pfog9d/overview))
2. Next, we performed one additional epoch using only a subset of the Japanese ultra-orca-boros-en-ja items to enhance JA performance (as SlimOrca from the first step is mostly EN) ([WandB Log](https://wandb.ai/jondurbin/shisa-mega-7b-v1.1/runs/dopsr0o7/overview))
3. Finally, the model was tuned using a DPOTrainer on a small subset of ultrafeedback (EN) and our own JA DPO dataset, which uses gpt-4 outputs as the chosen values and outputs from stage 1's preliminary model as the rejected values ([WandB Log](https://wandb.ai/jondurbin/shisa-mega-dpo-7b-v1.1))

During our training process, we also gained some key insights on [why some existing Japanese models seem to underperform](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets#analysis) even versus models that have no additional JA training, and we hope that sharing this analysis will be useful to other teams developing Japanese language models.

While we need to explore this further, as an experimental validation we applied a version of our fine-tuning set to an existing base model ("Gamma 7B"), and the initial JA MT-Bench results suggest that we can drastically increase functional performance with our tuning approach:

| Model                          | Score |
| ------------------------------ | ----- |
| shisa-gamma-7b-allsources-v0.4 | 5.65  |
| ja-stablelm-instruct-gamma-7b* | 4.01  |

## Performance
Throughout our training, we did extensive human evaluation for each model to cross-validate our model performance, and we are currently conducting ongoing larger-scale manual head-to-head testing between models. Our intention is to open up and scale this data collection as we further develop our tools. For more information and updates, please see our [project wiki](https://github.com/AUGMXNT/shisa/wiki).

While we believe [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) is a useful metric for our [base model](https://huggingface.co/augmxnt/shisa-base-7b-v1), and it was extremely useful during our tuning process for initial validations, our fine-tune training includes a percentage of the benchmark train splits, so we provide these llm-jp-eval results primarily as a point of interest:

| AVR    | MC     | NLI    | QA     | RC     |
|--------|--------|--------|--------|--------|
| 0.7480 | 0.8900 | 0.8040 | 0.4153 | 0.8825 |

*(We run a [slightly modified llm-jp-eval](https://github.com/llm-jp/llm-jp-eval/compare/main...AUGMXNT:llm-jp-eval:main) to support testing of Qwen and to emit a `bos_token` if available)*

For our final model, since it's customary to include benchmarks, we've used Stability AI Japan's [Japanese MT-Bench](https://github.com/Stability-AI/FastChat) as a more representative test of our model's capabilities. For [our JA MT-Bench testing](https://github.com/Stability-AI/FastChat/compare/jp-stable...AUGMXNT:FastChat:jp-stable) we use a Japanese prompt ("あなたは役立つアシスタントです。") as well as `--num-choices 4` in an effort to reduce sampling variability. However, we've still observed regular 0.5+ point (and sometimes even greater) swings between generations, as well as issues with default prompts and parameters when testing, so we'd urge caution in over-interpreting these scores: treat them as a probabilistic directional indicator rather than a definitive score or ranking:

| Benchmark   | Score |
| ----------- | ----- |
| JA MT-Bench | 5.02  |
| MT-Bench    | 5.71  |

There is an [MT-Bench Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard), but as JA MT-Bench is still under development, for convenience here is a comparison of the JA MT-Bench scores of some other models (our scores were rated by `gpt-4-0613`):

| Model                                             | Score    |
| ------------------------------------------------- | -------- |
| gpt-4-0613                                        | 9.40     |
| gpt-4-1106-preview                                | 9.17     |
| gpt-3.5-turbo*                                    | 8.41     |
| Qwen-14B-Chat                                     | 7.47     |
| **shisa-7b-v1**                                   | **5.02** |
| ELYZA-japanese-Llama-2-7b-fast-instruct*          | 4.86     |
| ja-stablelm-instruct-gamma-7b*                    | 4.01     |
| japanese-stablelm-instruct-alpha-7b*              | 2.74     |
| Mistral-7B-OpenOrca-ja*                           | 2.23     |
| youri-7b-chat*                                    | 2.00     |
| Mistral-7B-Instruct-v0.1*                         | 1.78     |
| llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0* | 1.31     |

*(Marked JA MT-Bench results in this section are [sourced from shi3z](https://note.com/shi3zblog/n/n6b2ac5874021))*

## Limitations
Although our model demonstrates a reasonably high level of Japanese fluency, as a 7B parameter model it is prone to higher hallucination rates and less effective instruction following and reasoning than larger-class models. It also does not have complete mastery of the Japanese language, and a native speaker will spot occasional mistakes like non-idiomatic/awkward phrasing and improper tenses/speech levels.

We've also noticed a small amount of language leakage, likely largely attributable to our tokenizer expansion. This may be fixable with sampler settings like [Min P](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/) or additional targeted training, and we plan on doing additional work on automated detection/sampler sweeps in the future. One interesting observation: based on our data collection, we found that as we iterated, the DPO process significantly exacerbated this issue, but our DPO models still had significantly higher human preference rates, so there was a bit of a trade-off in our choice of final tune.
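For context on the Min P sampler mentioned above: it keeps only tokens whose probability is at least `min_p` times the most likely token's probability, then renormalises. A toy sketch of my own over a plain probability list (not the linked implementation):

```python
# Toy Min P filter over a probability distribution (illustrative only).
def min_p_filter(probs, min_p=0.1):
    """Zero out tokens below min_p * max(probs), then renormalise."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# Here the threshold is 0.1 * 0.6 = 0.06, so the last token is dropped.
print(min_p_filter([0.6, 0.3, 0.06, 0.04]))
```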

While we believe that training larger models can improve performance using our existing approach and dataset, there are also many improvements we'd like to make for future models. We believe there is quite a bit of low-hanging fruit for improving performance with even more training efficiency, largely through improving the quality and construction of datasets.

## Usage
Sample code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

model_name = "augmxnt/shisa-7b-v1"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16,
    device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True)

# The prompt template is included in the model's tokenizer_config.json so you shouldn't need this, but we've included it for convenience
# tokenizer.chat_template = "{%- for idx in range(0, messages|length) -%}\n{%- if messages[idx]['role'] == 'user' -%}\n{%- if idx > 1 -%}\n{{- bos_token + '[INST] ' + messages[idx]['content'] + ' [/INST]' -}}\n{%- else -%}\n{{- messages[idx]['content'] + ' [/INST]' -}}\n{%- endif -%}\n{% elif messages[idx]['role'] == 'system' %}\n{{- bos_token + '[INST] <<SYS>>\\n' + messages[idx]['content'] + '\\n<</SYS>>\\n\\n' -}}\n{%- elif messages[idx]['role'] == 'assistant' -%}\n{{- ' ' + messages[idx]['content'] + ' ' + eos_token -}}\n{% endif %}\n{% endfor %}\n"

# A more typical prompt: あなたは役に立つアシスタントです。("You are a helpful assistant.")

# You are an avid Pokemon fanatic.
prompt = "あなたは熱狂的なポケモンファンです。"
chat = [{"role": "system", "content": prompt}]

# Who is the most powerful Pokemon? Explain your choice.
user_input = "最強のポケモンは誰ですか?その選択理由を説明してください。"
chat.append({"role": "user", "content": user_input})

# Generate - add_generation_prompt to make sure it continues as assistant
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
# For multi-GPU, find the device of the first parameter of the model
first_param_device = next(model.parameters()).device
inputs = inputs.to(first_param_device)

with torch.no_grad():
    outputs = model.generate(
        inputs,
        pad_token_id=tokenizer.eos_token_id,
        max_new_tokens=1000,
        temperature=0.7,
        repetition_penalty=1.05,
        top_p=0.95,
        do_sample=True,
        streamer=streamer,
    )

# Add just the new tokens to our chat
new_tokens = outputs[0, inputs.size(1):]
response = tokenizer.decode(new_tokens, skip_special_tokens=True)
chat.append({"role": "assistant", "content": response})
```

## Prompt format
The prompt format is llama-2 chat:

```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```

For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
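The multi-turn layout above can also be assembled by hand, e.g. as follows (a minimal sketch of my own; in real code, prefer the tokenizer's built-in chat template described next):

```python
# Manually assemble the llama-2 style multi-turn prompt shown above.
# Illustrative sketch; prefer tokenizer.apply_chat_template in practice.
def build_multi_turn(system_message, turns, next_prompt):
    """turns is a list of (prompt, response) pairs already completed."""
    text = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n"
    for i, (prompt, response) in enumerate(turns):
        if i == 0:
            text += f"{prompt} [/INST] {response} </s>"
        else:
            text += f"<s>[INST] {prompt} [/INST] {response} </s>"
    if turns:
        text += f"<s>[INST] {next_prompt} [/INST]"
    else:
        text += f"{next_prompt} [/INST]"
    return text

print(build_multi_turn("You are a helpful assistant.", [("Hi", "Hello!")], "How are you?"))
```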

This [prompt template](https://huggingface.co/docs/transformers/main/chat_templating) is included in the tokenizer config, and can be used via the Hugging Face tokenizer's `apply_chat_template` method, e.g.:

```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('augmxnt/shisa-7b-v1')
chat = [
  {"role": "system", "content": "You are Aiko, a friendly AI assistant."},
  {"role": "user", "content": "Hello, how are you?"},
  {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
  {"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```

**NOTE:** For proper responses, you should be using our `bos_token` (`<s>`) to begin a string. This is automatically added by `tokenizer.encode()`, but if you are crafting a custom template or using an encoding method that skips special tokens, you may have to add it yourself.

## Acknowledgements
Team: [Leonard Lin](https://huggingface.co/randomfoo), [Jon Durbin](https://huggingface.co/jondurbin), Mariko Sato, and Florian von Bock

Compute for this model was generously sponsored by [AKA Virtual](https://akavirtual.com/) (Tokyo, Japan).

Thanks to the [LLM-jp](https://llm-jp.nii.ac.jp/), [Stability AI Japan](https://ja.stability.ai/), and [LMSYS](https://lmsys.org/) teams for their work on llm-jp-eval, Japanese MT-Bench, and MT-Bench.

Also, thanks to all the volunteers who provided invaluable human preference testing!

We are actively looking for additional compute as we train better and larger models for this project. Please drop us a line at: *compute at augmxnt dot com*

---
*(GPT-4によって非常に軽微な編集を加えて翻訳されました)*

# シーサー7B

**シーサー7B**(`shisa-7b-v1`)は、合成データ駆動のアプローチを用いて、優れた日本語と英語能力を両立することを目指すバイリンガル(日本語/英語)汎用チャットモデルです。

このモデルは、[Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1)を基に、Mistralのオリジナルのトークナイザーよりも日本語において2倍以上効率的な、日本語最適化のカスタム拡張トークナイザーを用いて作成されました。ベースモデルは、主に日本語からなる約80億トークンで追加の事前学習を行っています。その後、[airoboros-3.1](https://huggingface.co/datasets/jondurbin/airoboros-3.1)の拡張された機械翻訳版、[ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)からの最高得点項目のセット、そして目標言語で直接新たに生成された[airoboros](https://github.com/jondurbin/airoboros)のデータで微調整しています。

商用を含むあらゆる目的で使用可能な寛容なApache 2.0ライセンスの下で、ベースモデル、データセット、およびパイプラインコードも公開しています:
* [shisa-base-7b-v1](https://huggingface.co/augmxnt/shisa-base-7b-v1) - 拡張トークナイザーと追加の日本語事前学習を備えたベースモデル
* [shisa-pretrain-en-ja-v1](https://huggingface.co/datasets/augmxnt/shisa-pretrain-en-ja-v1) - 事前学習データセット
* [ultra-orca-boros-en-ja](https://huggingface.co/datasets/jondurbin/ultra-orca-boros-en-ja) - 合成生成、機械翻訳、プログラムによる検証を経たJA/EN微調整データセット
* [shisa-en-ja-dpo-v1](https://huggingface.co/datasets/augmxnt/shisa-en-ja-dpo-v1) - ultrafeedbackからのDPOペアの小さなサブセットと、GPT-4生成の出力を選択された応答、初期の7Bモデルの出力を却下された応答とした日本語のDPOペア
* [シーサーリポジトリ](https://github.com/AUGMXNT/shisa) - 翻訳、データセット生成、トレーニング、評価コードなどが含まれています

さらに、アブレーション結果やテスト方法論、主要な調査結果など、プロセスの詳細を解説した拡張ライトアップを[当プロジェクトwiki](https://github.com/AUGMXNT/shisa/wiki)で公開する作業を進めており、研究者の参考になる情報を提供しています。

## 微調整

当初の直感は、最良の[既存の公開日本語トレーニングセット](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets)を組み入れることで、より強力な日本語モデルを作成できるかどうかを確かめることでした。しかし、初期の検討とテストの後、自前の合成データセットの翻訳/生成だけに焦点を当てた短期間のトレーニングで優れた結果が得られると結論付けました。

私たちは複数の翻訳ツールを比較し、手動でレビューを行った結果、`gpt-4`がほぼ常に最高品質の翻訳を提供する一方、Googleの`text-bison-32k`は品質、コスト、スループットのバランスが良いと判断しました。複数の繰り返しを経て、無効な翻訳のフラグ付けとフィルタリング、必要に応じた再翻訳とバックフィルのための追加のアルゴリズムを含むように、翻訳アプローチを洗練させました。

また、このプロジェクトは、[NEFTune](https://arxiv.org/abs/2310.05914)や[DPO](https://arxiv.org/abs/2305.18290)トレーニングの導入など、新しい技術を適用する機会にもなりました。

v1リリースにあたっては、大量の人間の嗜好テスト(数千の生成と複数ラウンドのペアワイズ比較)に基づいてリリース候補から選択しました。勝ち/負け/引き分けの集計と、[BTLモデル](https://datascience.oneoffcoder.com/btl-model.html)(iLSR)を[choix](https://github.com/lucasmaystre/choix)で用いて結果を分析しました。
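
上記のペアワイズ分析の考え方は、次のような最小限のスケッチで表せます(実際の分析ではchoixのiLSR実装を使用しています。以下は、仮定の合成データに対する単純なBradley-Terry MM更新による説明用の例です):

```python
from collections import Counter

def bradley_terry(n_items, pairs, iters=200):
    """(勝者, 敗者) ペアの一覧から各モデルの強さパラメータを推定する簡易MM更新。"""
    wins = Counter(w for w, _ in pairs)            # 各モデルの総勝利数
    games = Counter(frozenset(p) for p in pairs)   # 各ペアの対戦回数
    p = [1.0] * n_items
    for _ in range(iters):
        p_new = []
        for i in range(n_items):
            # MM更新: p_i <- W_i / sum_j n_ij / (p_i + p_j)
            denom = sum(games[frozenset((i, j))] / (p[i] + p[j])
                        for j in range(n_items) if j != i)
            p_new.append(wins[i] / denom if denom else p[i])
        total = sum(p_new)
        p = [x / total for x in p_new]  # 正規化
    return p

# 仮定の合成データ: モデル0が最も強く、モデル2が最も弱い対戦結果
pairs = [(0, 1), (0, 1), (1, 0), (0, 2), (0, 2), (1, 2), (1, 2), (2, 1)]
strengths = bradley_terry(3, pairs)
```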

最良の候補モデルは、3ステップのプロセスで微調整されました:

1. 最初に、モデルは`ultra-orca-boros-en-ja`とSlimOrca([WandB Log](https://wandb.ai/jondurbin/shisa-7b-v1/runs/k8pfog9d/overview))で微調整されました。
2. 次に、日本語のパフォーマンスを向上させるため、ultra-orca-boros-en-jaの一部を使用して追加の1エポックを実行しました(最初の段階のSlimOrcaは主に英語のため)([WandB Log](https://wandb.ai/jondurbin/shisa-mega-7b-v1.1/runs/dopsr0o7/overview))。
3. 最後に、モデルは小規模のultrafeedback(英語)と独自のJA DPOデータセットに対してDPOTrainerで調整されました。このJA DPOデータセットは、gpt-4の出力を選択された応答、ステージ1の予備モデルの出力を却下された応答としています。([WandB Log](https://wandb.ai/jondurbin/shisa-mega-dpo-7b-v1.1))
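
なお、ステップ3で使用したDPOの選好データは、概念的には次のような構造のレコードからなります(内容は説明のための仮定の例で、実際のデータセットの項目ではありません):

```python
# 説明のための仮定の例: DPO学習用の選好ペア1件の構造
dpo_record = {
    "prompt": "日本で一番高い山は何ですか?",
    "chosen": "日本で一番高い山は富士山で、標高は3,776メートルです。",  # gpt-4の出力(選択された応答)
    "rejected": "日本で一番高い山は北岳です。",  # ステージ1予備モデルの出力(却下された応答)
}
```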

トレーニングの過程で、なぜ一部の既存の日本語モデルが、追加の日本語トレーニングを行っていないモデルに対してさえパフォーマンスが低いのか、についてのいくつかの重要な洞察を得ました。この分析結果を共有することで、他のチームが日本語モデルを開発する際の参考になればと考えています。

さらなる検証は必要ですが、実験として微調整セットのあるバージョンを既存のベースモデル("Gamma 7B")に適用したところ、初期のJA MT-Bench結果が示すように、私たちのチューニングアプローチによって機能的なパフォーマンスを劇的に向上させることができました:

| モデル | スコア |
| ------------------------------ | ----- |
| shisa-gamma-7b-allsources-v0.4 | 5.65 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |

## パフォーマンス
トレーニング全体を通じて、各モデルについて人間による評価を行い、モデルのパフォーマンスを相互に検証しました。現在、モデル間の手動での比較テストを大規模に行っています。私たちの目指すところは、ツールをさらに発展させ、このデータ収集を公開して拡張することです。詳細と更新情報については、[プロジェクトwiki](https://github.com/AUGMXNT/shisa/wiki)をご覧ください。

[llm-jp-eval](https://github.com/llm-jp/llm-jp-eval)は私たちの[ベースモデル](https://huggingface.co/augmxnt/shisa-base-7b-v1)の有用な指標であり、微調整プロセス中の初期検証に非常に役立つと考えていますが、微調整トレーニングにはベンチマークのtrain分割の一部が含まれているため、以下のllm-jp-evalの結果はあくまで参考値として提供しています:

| AVR | MC | NLI | QA | RC |
|-------|-------|-------|-------|-------|
| 0.7480| 0.8900| 0.8040| 0.4153| 0.8825|

*(Qwenのテストをサポートし、可能であれば`bos_token`を発行するために、[わずかに修正したllm-jp-eval](https://github.com/llm-jp/llm-jp-eval/compare/main...AUGMXNT:llm-jp-eval:main)を実行しています)*

最終モデルについては、ベンチマークを掲載するのが慣例のため、モデルの能力をより代表的にテストできるStability AI Japanの[Japanese MT-Bench](https://github.com/Stability-AI/FastChat)を使用しました。[私たちのJA MT-Benchテスト](https://github.com/Stability-AI/FastChat/compare/jp-stable...AUGMXNT:FastChat:jp-stable)では、サンプリング変動を減らすために、日本語のシステムプロンプト(「あなたは役立つアシスタントです。」)と`--num-choices 4`を使用しています。ただし、生成間で0.5点以上(時にはそれ以上)の変動を頻繁に観察しており、テスト時のデフォルトのプロンプトとパラメータにも問題があった経験から、これらのスコアを過度に解釈することには注意が必要です。確定的なスコアやランキングではなく、確率的な方向性の指標として扱うことをお勧めします:

| ベンチマーク | スコア |
| ----------- | ----- |
| JA MT-Bench | 5.02 |
| MT-Bench | 5.71 |

[MT-Bench Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)がありますが、JA MT-Benchはまだ開発中のため、参考までに、他のモデルのJA MT-Benchスコアとの比較を示します(私たちのスコアは`gpt-4-0613`によって評価されました):

| モデル | スコア |
| ------------------------------------------------- | ---- |
| gpt-4-0613 | 9.40 |
| gpt-4-1106-preview | 9.17 |
| gpt-3.5-turbo* | 8.41 |
| Qwen-14B-Chat | 7.47 |
| **shisa-7b-v1** | **5.02** |
| ELYZA-japanese-Llama-2-7b-fast-instruct* | 4.86 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |
| japanese-stablelm-instruct-alpha-7b* | 2.74 |
| Mistral-7B-OpenOrca-ja* | 2.23 |
| youri-7b-chat* | 2.00 |
| Mistral-7B-Instruct-v0.1* | 1.78 |
| llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0* | 1.31 |

*(このセクションでマークされたJA MT-Benchの結果は[shi3zから引用](https://note.com/shi3zblog/n/n6b2ac5874021)しました)*

## 制限事項
当モデルは十分な日本語の流暢さを示していますが、7Bパラメータのモデルとして、より大きなクラスのモデルに比べて幻覚率が高く、指示の遵守や推論の効果が低い傾向があります。また、日本語を完全に習得するには至っておらず、ネイティブスピーカーは、非慣用的/不自然な表現や、不適切な時制・話し言葉のレベルなどの間違いをときどき見つけることがあります。

また、トークナイザーの拡張に大きく起因すると思われる、わずかな言語リークも確認しています。これらは[Min P](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/)などのサンプラー設定や、追加のターゲット指向型トレーニングで修正できる可能性があり、今後、自動検出/サンプラーのスウィープについて追加の作業を行う予定です。興味深い観察として、私たちのデータ収集によれば、DPOプロセスはこの問題を大幅に悪化させましたが、それでもDPOモデルは人間の選好率が大幅に高かったため、最終的な微調整の選択には一定のトレードオフがありました。
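
参考までに、Min Pサンプリングの考え方は次のような最小限のスケッチで表せます(実際には推論エンジン側のサンプラー実装を使用します。以下は仮定の確率分布による説明用の例です):

```python
def min_p_filter(probs, min_p=0.05):
    """Min Pサンプリング: 最大確率トークンに対する比率がmin_p未満のトークンを除外する。"""
    threshold = min_p * max(probs.values())  # 動的なしきい値
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}  # 再正規化

# 仮定のトークン確率分布(言語リークを「ノイズ」トークンで表現)
probs = {"東京": 0.6, "京都": 0.28, "大阪": 0.08, "ノイズ": 0.04}
filtered = min_p_filter(probs, min_p=0.1)  # しきい値 = 0.06 → 「ノイズ」が除外される
```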

現存するアプローチとデータセットを使用して、より大規模なモデルをトレーニングすればパフォーマンスが向上すると考えていますが、今後のモデルに向けて行いたい改良も多くあります。データセットの品質と構築を改善することで、トレーニング効率を高め、さらなるパフォーマンス向上を実現する余地はまだ相当にあると考えています。

## 使用法
サンプルコード:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

model_name = "augmxnt/shisa-7b-v1"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16,
    device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True)

# プロンプトテンプレートはモデルのtokenizer_config.jsonに含まれているので、これは必要ないはずですが、便宜上こちらにも掲載しています
# tokenizer.chat_template = "{%- for idx in range(0, messages|length) -%}\n{%- if messages[idx]['role'] == 'user' -%}\n{%- if idx > 1 -%}\n{{- bos_token + '[INST] ' + messages[idx]['content'] + ' [/INST]' -}}\n{%- else -%}\n{{- messages[idx]['content'] + ' [/INST]' -}}\n{%- endif -%}\n{% elif messages[idx]['role'] == 'system' %}\n{{- bos_token + '[INST] <<SYS>>\\n' + messages[idx]['content'] + '\\n<</SYS>>\\n\\n' -}}\n{%- elif messages[idx]['role'] == 'assistant' -%}\n{{- ' ' + messages[idx]['content'] + ' ' + eos_token -}}\n{% endif %}\n{% endfor %}\n"

# より典型的なプロンプト: あなたは役に立つアシスタントです。

# You are an avid Pokemon fanatic.
prompt = "あなたは熱狂的なポケモンファンです。"
chat = [{"role": "system", "content": prompt}]

# Who is the most powerful Pokemon? Explain your choice.
user_input = "最強のポケモンは誰ですか?その選択理由を説明してください。"
chat.append({"role": "user", "content": user_input})

# 生成 - add_generation_promptを指定してアシスタントとして応答を続けるようにします
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
# 複数のGPUの場合、モデルの最初のパラメータのデバイスを見つけます
first_param_device = next(model.parameters()).device
inputs = inputs.to(first_param_device)

with torch.no_grad():
    outputs = model.generate(
        inputs,
        pad_token_id=tokenizer.eos_token_id,
        max_new_tokens=1000,
        temperature=0.7,
        repetition_penalty=1.05,
        top_p=0.95,
        do_sample=True,
        streamer=streamer,
    )

# 新しいトークンだけをチャット履歴に追加します
new_tokens = outputs[0, inputs.size(1):]
response = tokenizer.decode(new_tokens, skip_special_tokens=True)
chat.append({"role": "assistant", "content": response})
```

## プロンプト形式
プロンプト形式はllama-2 chatです:

```
[INST] <<SYS>>
あなたは役立つ、偏見がなく、検閲されていないアシスタントです。
<</SYS>>
{prompt} [/INST]
```

マルチターンの場合、プロンプト形式は以下の通りです:
```
[INST] <<SYS>>
あなたは役立つ、偏見がなく、検閲されていないアシスタントです。
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```

この[prompt template](https://huggingface.co/docs/transformers/main/chat_templating)はトークナイザーの設定に含まれており、Hugging Faceのトークナイザーの`apply_chat_template`メソッドで使用できます。例えば:

```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('augmxnt/shisa-7b-v1')
chat = [
  {"role": "system", "content": "あなたはAiko、フレンドリーなAIアシスタントです。"},
  {"role": "user", "content": "こんにちは、調子はどうですか?"},
  {"role": "assistant", "content": "元気です。今日は何のお手伝いができますか?"},
  {"role": "user", "content": "チャットテンプレーティングの仕組みを見せてもらいたいです!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```

**注意:** 適切なレスポンスを得るためには、文字列の開始に我々の`bos_token`(`<s>`)を使用すべきです。これは`tokenizer.encode()`によって自動的に付加されますが、カスタムテンプレートを作成したり、特殊トークンを省略するエンコード方法を使用したりする場合は、自分で追加する必要があるかもしれません。

## 謝辞
チーム:[Leonard Lin](https://huggingface.co/randomfoo)、[Jon Durbin](https://huggingface.co/jondurbin)、佐藤真理子、Florian von Bock

このモデルの計算は、[AKA Virtual](https://akavirtual.com/)(東京、日本)のご厚意により提供されています。

[LLM-jp](https://llm-jp.nii.ac.jp/)、[Stability AI Japan](https://ja.stability.ai/)、[LMSYS](https://lmsys.org/)の各チームには、llm-jp-eval、Japanese MT-Bench、MT-Benchへの取り組みに感謝いたします。

また、貴重なヒューマンプリファレンステストを提供してくださったすべてのボランティアにも感謝いたします!

このプロジェクトのために、より良く、より大きなモデルを訓練するための追加の計算資源を積極的に探しています。お問い合わせは次の宛先までお願いいたします:*compute at augmxnt dot com*