Commit a381c43 (parent 0a0ee17) by TheBloke: Upload README.md

---
base_model: akjindal53244/Arithmo-Mistral-7B
datasets:
- akjindal53244/Arithmo-Data
inference: false
language:
- en
license: apache-2.0
model_creator: Ashvini Kumar Jindal
model_name: Arithmo Mistral 7B
model_type: mistral
prompt_template: 'Question: {prompt}

  Answer:

  '
quantized_by: TheBloke
tags:
- Mathematical Reasoning
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Arithmo Mistral 7B - AWQ
- Model creator: [Ashvini Kumar Jindal](https://huggingface.co/akjindal53244)
- Original model: [Arithmo Mistral 7B](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B)

<!-- description start -->
## Description

This repo contains AWQ model files for [Ashvini Kumar Jindal's Arithmo Mistral 7B](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B).


### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Arithmo-Mistral-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Arithmo-Mistral-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Arithmo-Mistral-7B-GGUF)
* [Ashvini Kumar Jindal's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: QA

```
Question: {prompt}
Answer:

```

<!-- prompt-template end -->


<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters

For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Arithmo-Mistral-7B-AWQ/tree/main) | 4 | 128 | [CamelAI Math](https://huggingface.co/datasets/andersonbcdefg/math) | 4096 | 4.15 GB |

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to do a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Arithmo-Mistral-7B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Arithmo-Mistral-7B-AWQ`.
7. Select **Loader: AutoAWQ**.
8. Click **Load**; once loaded, the model is ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
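
If you'd rather script the download than use the web UI, here is a minimal sketch using the `huggingface_hub` library (`pip3 install huggingface-hub`); the `local_dir` value is just an example path:

```python
# Minimal sketch: download the model repo programmatically.
# The local_dir path is an arbitrary example, not a required location.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/Arithmo-Mistral-7B-AWQ",
    local_dir="Arithmo-Mistral-7B-AWQ",
)
```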
<!-- README_AWQ.md-text-generation-webui end -->

<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.

For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Arithmo-Mistral-7B-AWQ --quantization awq
```
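
Once the server is up, you can query it over HTTP. Below is a minimal sketch using `requests`; the `/generate` endpoint, response shape, and `localhost:8000` default are assumptions based on vLLM's demo API server, so adjust for your deployment:

```python
# Minimal sketch: query a locally running vLLM demo API server.
# Endpoint path, port and response shape are assumptions about the
# demo server started with the command above.
import requests

payload = {
    "prompt": "Question: What is 291 - 150?\nAnswer:",
    "max_tokens": 128,
    "temperature": 0.8,
}
response = requests.post("http://localhost:8000/generate", json=payload)
print(response.json()["text"])  # completions, including the prompt
```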

- When using vLLM from Python code, again set `quantization=awq`.

For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Plain string (not an f-string); filled in via .format() below
prompt_template = '''Question: {prompt}
Answer:
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Arithmo-Mistral-7B-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Arithmo-Mistral-7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''Question: {prompt}
Answer:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print("Model output: ", response)
```
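
`InferenceClient` can also stream tokens as they are generated, which is useful for interactive applications. A minimal sketch, reusing the same placeholder endpoint URL:

```python
# Minimal streaming sketch: print tokens as they arrive instead of
# waiting for the full response. Same placeholder endpoint as above.
from huggingface_hub import InferenceClient

client = InferenceClient("https://your-endpoint-url-here")

for token in client.text_generation("Question: What is 2+2?\nAnswer:",
                                    max_new_tokens=128,
                                    stream=True):
    print(token, end="", flush=True)
```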
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using AutoAWQ

### Install the AutoAWQ package

Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later.

```shell
pip3 install autoawq
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### AutoAWQ example code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/Arithmo-Mistral-7B-AWQ"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True)

prompt = "Tell me about AI"
prompt_template = f'''Question: {prompt}
Answer:
'''

print("*** Running model.generate:")

token_input = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    token_input,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("LLM output: ", text_output)

"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski


Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Ashvini Kumar Jindal's Arithmo Mistral 7B

# Model Card for Arithmo-Mistral-7B

[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](CODE_LICENSE)
[![Model Weight License](https://img.shields.io/badge/Model%20Weights%20License-Apache_2.0-green.svg)](LICENSE)
[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/release/python-390/)

**P.S.:** Please reach out to [Ashvini Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/) if you would be interested in supporting our compute needs. We are looking for small-scale support, so we'd appreciate any kind of help! :)

## Model Details

Arithmo-Mistral-7B is trained to reason about and answer mathematical problems, and can also write a Python program that, upon execution, prints the answer to the question. We used [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base model and used QLoRA to fine-tune it on a single RTX 4090 GPU.

### Model Description

- **Project GitHub Page:** https://github.com/akjindal53244/Arithmo-Mistral-7B
- **Developed by:** [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/)
- **Funded by:** self-funded
- **Model type:** fine-tuned
- **Language(s) (NLP):** English
- **Finetuned from model:** mistralai/Mistral-7B-v0.1

## Results

Arithmo-Mistral-7B outperforms existing 7B and 13B state-of-the-art mathematical reasoning models. Refer to the [Comparing Arithmo-Mistral-7B with other LLM models](https://github.com/akjindal53244/Arithmo-Mistral-7B/tree/master#comparing-arithmo-mistral-7b-with-other-llm-models) section for more details.

<table>
  <thead>
    <tr>
      <th>Prompt Approach</th>
      <th>GSM8k</th>
      <th>MATH</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Zero-Shot CoT</td>
      <td><b>74.7</b></td>
      <td><b>25.3</b></td>
    </tr>
    <tr>
      <td>Zero-Shot PoT</td>
      <td><b>71.2</b></td>
      <td>-</td>
    </tr>
  </tbody>
</table>

- **Zero-Shot CoT**: Given a question as the prompt, the model generates reasoning steps to solve it along with the answer. We check whether the answer matches the ground truth.
- **Zero-Shot PoT**: We prompt the model to generate a Python program for the given question. During inference, we execute the program generated by the model and check whether its output matches the ground-truth answer; a sketch of this check follows.
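
To make the PoT check concrete, here is an illustrative sketch (not the authors' actual evaluation harness) of running a generated program and comparing its printed output with the ground truth; `generated_program` is a hypothetical stand-in for real model output:

```python
# Illustrative PoT scoring sketch: execute the model-generated program,
# capture what it prints, and compare against the ground truth.
# NOT the authors' harness; `generated_program` is a hypothetical example.
import contextlib
import io

generated_program = "print(291 - 150)"  # pretend the model produced this
ground_truth = "141"

buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    exec(generated_program)  # caution: only execute trusted or sandboxed code

print("PoT answer correct:", buffer.getvalue().strip() == ground_truth)
```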


## Installation

```
pip install "transformers>=4.34.0"
pip install accelerate
pip install sentencepiece
pip install protobuf

# If you are GPU poor like me
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu

# If you have a GPU.
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu118
pip install scipy
pip install bitsandbytes
```


## How to query the model

```
# Set `run_model_on_gpu` to `False` if you are running on CPU. The model will generate reasoning steps along with the answer to your question. If you want it to generate a Python program instead, uncomment line 69 of the script, which adds a Python prompt.

$ python query_model.py
```
**Note:** The above script automatically formats your input, so you only need to type the question (e.g. `What is 2+2?`) without any prefix like `Question:`. Check out [query_model.py](https://github.com/akjindal53244/Arithmo-Mistral-7B/blob/master/query_model.py) for more details. <br><br>
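
If you would rather query the model directly than use the script, here is a minimal sketch using `transformers`; the generation settings and device placement are illustrative assumptions, not the settings used in `query_model.py`:

```python
# Minimal sketch: query Arithmo-Mistral-7B directly with transformers.
# Generation settings and device placement are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "akjindal53244/Arithmo-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# CoT format (see "CoT Format" below): "Question: <question>\n\nAnswer:"
prompt = "Question: What is 2+2?\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```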

##### Sample Input:
```
Question: There are total 10 children. I have to give 1 apple to first child, 2 apples to second child, 3 apples to third child, and so on. How many apples do I need?
```
##### Model Output:
```
Answer: The total number of apples needed is the sum of the first 10 positive integers.
This can be calculated using the formula for the sum of an arithmetic series:
\[S = \frac{n}{2}(a_1 + a_n),\]
where $S$ is the sum, $n$ is the number of terms, $a_1$ is the first term, and $a_n$ is the last term.
In this case, $n = 10$, $a_1 = 1$, and $a_n = 10$.
Plugging these values into the formula, we get:
\[S = \frac{10}{2}(1 + 10) = 5(11) = \boxed{55}.\]
The answer is: 55
```

Arithmo-Mistral-7B is trained with the following formats:
#### CoT Format (generate reasoning steps with answer):
```
Question: <question>

Answer:
```

#### PoT Format (generate a Python program):
```
Question: <question> <python_prompt>

Answer:
```
The model performs best when queried in one of these formats from your own script; a small helper is sketched below.
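
Here is a small helper (an illustrative sketch, not part of the original repo) that wraps a question in either format; the exact `<python_prompt>` wording comes from the training data, so the PoT instruction text below is an assumed example:

```python
# Illustrative helper for building CoT / PoT prompts in the formats above.
# The PoT instruction text is an assumed example, not the canonical
# <python_prompt> from the training data.
def build_prompt(question: str, pot: bool = False) -> str:
    if pot:
        # PoT: append a Python instruction after the question.
        question = f"{question} Write a Python program to solve this."
    return f"Question: {question}\n\nAnswer:"

print(build_prompt("What is 291 - 150?"))            # CoT prompt
print(build_prompt("What is 291 - 150?", pot=True))  # PoT prompt
```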

## Comparing Arithmo-Mistral-7B with other LLM models
Results for all models except `Arithmo-Mistral-7B` are taken from the [MetaMath](https://github.com/meta-math/MetaMath/blob/main/README.MD) repository.

| Model | GSM8k Pass@1 | MATH Pass@1 |
|---------------------|--------------|-------------|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| 🔥 **Arithmo-Mistral-7B Zero-Shot PoT** | **71.2** | -- |
| 🔥 **Arithmo-Mistral-7B Zero-Shot CoT** | **74.7** | **25.3** |
| WizardMath-70B | **81.6** | 22.7 |
| MetaMath-70B | **82.3** | **26.6** |


If you are interested in reproducing the results, see the [Reproducing Results](https://github.com/akjindal53244/Arithmo-Mistral-7B#reproducing-results) section.