---
base_model: https://huggingface.co/lightblue/openorca_stx
datasets:
- snow_simplified_japanese_corpus
- khalidalt/tydiqa-goldp
- csebuetnlp/xlsum
inference: false
language:
- ja
license: llama2
model_creator: Lightblue Technology Inc.
model_name: OpenOrca Stx
model_type: llama
prompt_template: '{prompt}

'
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# OpenOrca Stx - AWQ
- Model creator: [Lightblue Technology Inc.](https://huggingface.co/lightblue)
- Original model: [OpenOrca Stx](https://huggingface.co/lightblue/openorca_stx)

<!-- description start -->
## Description

This repo contains AWQ model files for [Lightblue Technology Inc.'s OpenOrca Stx](https://huggingface.co/lightblue/openorca_stx).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.

It is now also supported by the continuous-batching server [vLLM](https://github.com/vllm-project/vllm), allowing the use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models; however, AWQ enables the use of much smaller GPUs, which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/OpenOrca_Stx-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OpenOrca_Stx-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenOrca_Stx-GGUF)
* [Lightblue Technology Inc.'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lightblue/openorca_stx)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: None

```
{prompt}

```

<!-- prompt-template end -->

<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters

For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/OpenOrca_Stx-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB |

<!-- README_AWQ.md-provided-files end -->
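
If you want to fetch a specific branch programmatically rather than through `git`, the `huggingface_hub` library can download a revision directly. A minimal sketch, assuming you want the `main` branch from the table above (the `local_dir` value is just an example destination):

```python
# Minimal sketch: download one branch of this repo with huggingface_hub.
# The local_dir path is an arbitrary example, not prescribed by this repo.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/OpenOrca_Stx-AWQ",
    revision="main",               # branch name from the table above
    local_dir="OpenOrca_Stx-AWQ",  # example destination directory
)
```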

<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

When using vLLM as a server, pass the `--quantization awq` parameter, for example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/OpenOrca_Stx-AWQ --quantization awq
```
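
As a rough usage sketch, the running server can then be queried over HTTP. The `/generate` endpoint and the JSON field names below follow vLLM's example API server at the time of writing; treat them as assumptions and verify against your installed vLLM version:

```python
# Hedged sketch: query the vLLM API server started above.
# Endpoint and field names follow vLLM's example api_server; verify
# against your vLLM version's documentation.
import requests

response = requests.post(
    "http://localhost:8000/generate",
    json={
        "prompt": "Tell me about AI",
        "max_tokens": 256,
        "temperature": 0.7,
    },
)
print(response.json())
```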

When using vLLM from Python code, pass the `quantization=awq` parameter, for example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/OpenOrca_Stx-AWQ", quantization="awq")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code

### Install the necessary packages

Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later

```shell
pip3 install autoawq
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### You can then try the following example code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/OpenOrca_Stx-AWQ"

# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)

prompt = "Tell me about AI"
prompt_template = f'''{prompt}

'''

print("\n\n*** Generate:")

tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)

print("Output: ", tokenizer.decode(generation_output[0]))

# Inference can also be done using transformers' pipeline
from transformers import pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
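
If you want output to stream token by token rather than arriving all at once, transformers' `TextStreamer` can be passed to `generate()`. A minimal sketch, assuming the `model`, `tokenizer` and `tokens` objects from the example above; streaming support through AutoAWQ's `generate()` wrapper is an assumption worth verifying on your version:

```python
# Hedged sketch: stream decoded tokens to stdout as they are generated.
# Assumes `model`, `tokenizer` and `tokens` from the example above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    tokens,             # input_ids from the example above
    streamer=streamer,  # prints each decoded token as it arrives
    do_sample=True,
    temperature=0.7,
    max_new_tokens=512,
)
```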
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) and [vLLM](https://github.com/vllm-project/vllm).

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Lightblue Technology Inc.'s OpenOrca Stx

# About

This model is Lightblue's QLoRA finetune of OpenOrca's [Open-Orca/OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) model on Japanese fine-tuning datasets.

This model specialises in **closed question answering** in Japanese: input a piece of reference text, ask a question, and the model answers based on the reference text.

We trained on equal samples of the following three datasets:
* [SNOW](https://huggingface.co/datasets/snow_simplified_japanese_corpus)
* [TyDiQA (Ja)](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
* [XLSUM (Ja)](https://huggingface.co/datasets/csebuetnlp/xlsum)

which resulted in a dataset of 13,167 samples in total.

These three datasets were chosen because they represent three distinct fine-tuning tasks (text simplification, question answering, and text summarization, respectively) which we hypothesize can help to improve the language model's suitability for dealing with Japanese data.
These three datasets make up the model name: STX.

With these datasets, we achieve the following scores on the JGLUE benchmark:

| Model Name             | Open-Orca/OpenOrcaxOpenChat-Preview2-13B | lightblue/openorca_stx |
|------------------------|------------------------------------------|------------------------|
| jsquad-1.1-0.3         | 0.692                                    | 0.836                  |
| jcommonsenseqa-1.1-0.3 | 0.831                                    | 0.782                  |
| jnli-1.1-0.3           | 0.504                                    | 0.48                   |
| marc_ja-1.1-0.3        | 0.936                                    | 0.959                  |

Our model achieves much better results on the question answering benchmark (JSQuAD) than the base checkpoint, without severe degradation of performance on the other JGLUE benchmarks (JCommonsenseQA, JNLI, MARC-Ja), purely through QLoRA training.
This shows the potential of applying minimal QLoRA fine-tuning with Japanese datasets to strong language models such as [Open-Orca/OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) to achieve better results on narrow NLP tasks.

# How to use

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_dir = "lightblue/openorca_stx"  # this model's Hub repo

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir, torch_dtype=torch.bfloat16, device_map='auto',
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

def do_closed_qa(context, question):
    return context + "\n\n" + question

# Japanese reference article: an interview with comedian Razor Ramon RG
# about his impersonation of rugby player Michael Leitch.
test_article = """　モノマネのレパートリーに「リーチ・マイケル選手」があるレイザーラモンRGさん。本人公認のモノマネですが、ラグビーファンの反応に少し驚いたそうです。
　リーチ・マイケル選手のモノマネは、何がきっかけですか。
「2015年のワールドカップ（W杯）イングランド大会で日本が南アフリカを倒した次の日が、京都での番組ロケでした。当時は、アップルの共同創業者スティーブ・ジョブズのモノマネばかりでしたが、一緒にロケをしていたジャングルポケットから『リーチ・マイケルに似てますよ。ジョブズのまま、いけるんじゃないですか？』と言われたのが始まりです」
「ただ、みんな知識がない。ラグビーショップを探し、日本代表のユニホームが売り切れだったので、赤っぽいユニホームとピチピチの短パンをはいて。とりあえずSNSで『リーチ・マイケルです』っていっぱい写真を載せました」
「すると、それを見たリーチさん本人からDM（ダイレクトメッセージ）が届きました。『モノマネありがとうございます。もしモノマネをするなら、僕のユニホームを送りますので着てください』と。W杯後にユニホーム2着とパンツやソックスなどをほんまに送ってきてくれました。今着ているのがそれです」
これまで、数々の著名人をモノマネしてこられました。リーチ選手のネタの反響はいかがでしたか。
　「僕はラグビー経験がないですし、ラグビーを全然知らなかったけど、やっぱり本人からユニホームを頂いてるっていう“印籠（いんろう）”みたいなのがあって。『あいつはリーチさん本人に認められてる』と。一目置かれているのかなと感じます」
　「やっていることは、見た目を本人に寄せてワンチームって言うだけなんですけどね。それでも『ああ、リーチさんだ』と言ってもらえます」
　「リーチさんと実際に会うことなんて、簡単にはできないじゃないですか。でも、リーチさんのまねをしているRGには会えたわ、みたいな（笑）。何だろうな、有名な神社の支社のような存在ですかね。ありがたがられるという意味では他のモノマネとはすごく違いますね」
"""

test_question = "　リーチ・マイケルは何を送ってきましたか？"  # "What did Michael Leitch send?"

pipe(do_closed_qa(test_article, test_question), max_new_tokens=128, temperature=0)[0]["generated_text"]
# "ユニホーム2着とパンツやソックスなど" ("two uniforms, shorts, socks, and so on")
```
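
Since `pipeline` also accepts a list of prompts, several context/question pairs can be batched into one call. A small illustrative sketch reusing `pipe` and `do_closed_qa` from above; the second question is simply another line taken from the same article:

```python
# Illustrative sketch: batch several closed-QA prompts in one pipeline call.
# Reuses `pipe`, `do_closed_qa`, `test_article` and `test_question` from above.
pairs = [
    (test_article, test_question),
    (test_article, "　リーチ・マイケル選手のモノマネは、何がきっかけですか。"),
]

prompts = [do_closed_qa(context, question) for context, question in pairs]
for result in pipe(prompts, max_new_tokens=128, temperature=0):
    print(result[0]["generated_text"])
```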


# Training details

This model was trained for 1,000 steps (1.2 epochs), with evaluation every 50 steps. We then chose the best model from these evaluations based on validation loss.
We used the [qlora](https://github.com/artidoro/qlora) package from artidoro.
We trained with the following hyperparameters:

```
Per device evaluation batch size: 16
Per device train batch size: 8
LoRA (lora_r): 64
LoRA alpha (lora_alpha): 16
LoRA modules: all
Double quantization: Enabled
Quantization type: nf4
BF16: Enabled
Bits: 4
Warmup ratio: 0.03
Learning rate scheduler type: Constant
Gradient checkpointing: Enabled
Gradient accumulation steps: 2
Learning rate: 0.0002
Adam beta2: 0.999
Maximum gradient norm: 0.3
LoRA dropout: 0.05
Weight decay: 0.0
```
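
For readers who want to reproduce a similar configuration without the qlora script, the hyperparameters above map roughly onto the Hugging Face `peft`/`transformers`/`bitsandbytes` APIs. This is a hedged sketch of that mapping, not the authors' actual training code; the qlora repo wires these values up through its own CLI flags:

```python
# Hedged sketch: the hyperparameters above expressed via peft/transformers/
# bitsandbytes configs. Not the authors' training script.
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # Bits: 4
    bnb_4bit_use_double_quant=True,         # Double quantization: Enabled
    bnb_4bit_quant_type="nf4",              # Quantization type: nf4
    bnb_4bit_compute_dtype=torch.bfloat16,  # BF16: Enabled
)

lora_config = LoraConfig(
    r=64,               # LoRA (lora_r)
    lora_alpha=16,      # LoRA alpha
    lora_dropout=0.05,  # LoRA dropout
    task_type="CAUSAL_LM",
    # "LoRA modules: all" means adapters on every linear layer; the exact
    # target module names depend on the architecture.
)

training_args = TrainingArguments(
    output_dir="out",   # placeholder path
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,
    gradient_checkpointing=True,
    learning_rate=2e-4,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_grad_norm=0.3,
    adam_beta2=0.999,
    weight_decay=0.0,
    bf16=True,
    max_steps=1000,
    evaluation_strategy="steps",
    eval_steps=50,
)
```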

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/UWiE7z5tG8t_vdSFrb5WC.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/_fKBf9sdq9UAKKYMxM6ad.png)