Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


MetaMath-Mistral-2x7B - GGUF
- Model creator: https://huggingface.co/harshitv804/
- Original model: https://huggingface.co/harshitv804/MetaMath-Mistral-2x7B/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MetaMath-Mistral-2x7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [MetaMath-Mistral-2x7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [MetaMath-Mistral-2x7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [MetaMath-Mistral-2x7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [MetaMath-Mistral-2x7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [MetaMath-Mistral-2x7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [MetaMath-Mistral-2x7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [MetaMath-Mistral-2x7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [MetaMath-Mistral-2x7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [MetaMath-Mistral-2x7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [MetaMath-Mistral-2x7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [MetaMath-Mistral-2x7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [MetaMath-Mistral-2x7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [MetaMath-Mistral-2x7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [MetaMath-Mistral-2x7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [MetaMath-Mistral-2x7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [MetaMath-Mistral-2x7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [MetaMath-Mistral-2x7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [MetaMath-Mistral-2x7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [MetaMath-Mistral-2x7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [MetaMath-Mistral-2x7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [MetaMath-Mistral-2x7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf/blob/main/MetaMath-Mistral-2x7B.Q8_0.gguf) | Q8_0 | 7.17GB |
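
To run one of these files, download it and load it with any GGUF-compatible runtime (llama.cpp, Ollama, llama-cpp-python, etc.). Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; the choice of runtime and of the Q4_K_M file are illustrative assumptions, while the repo and file names come from the table above.

```python
# minimal sketch: fetch the Q4_K_M file and run it with llama-cpp-python
# (the runtime choice is an assumption; any GGUF-compatible runtime works)
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/harshitv804_-_MetaMath-Mistral-2x7B-gguf",
    filename="MetaMath-Mistral-2x7B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm(
    "### Instruction:\nWhat is 15% of 80?\n### Response: Let's think step by step.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```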




Original model description:
---
base_model:
- meta-math/MetaMath-Mistral-7B
tags:
- mergekit
- merge
- meta-math/MetaMath-Mistral-7B
- Mixture of Experts
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63060761cb5492c9859b64ea/BfR-Giwmh_3R-ymdeiI5k.png)

This is the MetaMath-Mistral-2x7B Mixture of Experts (MoE) model, created using [mergekit](https://github.com/cg123/mergekit) for the purpose of experimenting with and learning about MoE.

## Merge Details
### Merge Method

This model was merged with the SLERP (spherical linear interpolation) merge method, using [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) as the base model.

### Models Merged

The following models were included in the merge:
* [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) x 2

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: meta-math/MetaMath-Mistral-7B
        layer_range: [0, 32]
      - model: meta-math/MetaMath-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: meta-math/MetaMath-Mistral-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
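
The `t` parameter is the interpolation factor: `0` keeps the first source, `1` keeps the second, and each list is interpolated across the 32-layer range, so the self-attention and MLP weights follow opposite schedules, with `0.5` as the fallback for everything else. Conceptually, SLERP interpolates two weight tensors along the arc between them rather than along a straight line. The snippet below is a minimal sketch of that formula for illustration only, not mergekit's implementation:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between flattened weight tensors a and b."""
    a_n = a / (np.linalg.norm(a) + eps)         # direction of the first tensor
    b_n = b / (np.linalg.norm(b) + eps)         # direction of the second tensor
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)  # cosine of the angle between them
    theta = np.arccos(dot)
    if theta < eps:                             # (nearly) parallel: plain lerp is exact
        return (1 - t) * a + t * b
    # weight each endpoint by the sine of its share of the arc
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
```

Note that both sources here are the same checkpoint, so the interpolation returns the weights unchanged; as stated above, the merge exists to exercise the MoE/merge tooling rather than to change the model.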

## Inference Code
```python
## install dependencies
## !pip install -q -U git+https://github.com/huggingface/transformers.git
## !pip install -q -U git+https://github.com/huggingface/accelerate.git
## !pip install -q -U sentencepiece

## load model
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name = "harshitv804/MetaMath-Mistral-2x7B"

# load the model in bfloat16 (the dtype used for the merge) and let
# accelerate place the layers across the available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True,
)

# the Mistral tokenizer ships without a pad token; reuse EOS for padding
tokenizer.pad_token = tokenizer.eos_token

## inference

query = "Maximoff's monthly bill is $60 per month. His monthly bill increased by thirty percent when he started working at home. How much is his total monthly bill working from home?"

# MetaMath's Alpaca-style instruction template
prompt = f"""
Below is an instruction that describes a task. Write a response that appropriately completes the request.\n
### Instruction:\n
{query}\n
### Response: Let's think step by step.
"""

# tokenize the prompt and move it to the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# stream tokens to stdout as they are generated
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
outputs = model.generate(**inputs, max_length=2048, streamer=streamer)

# decode and print the full output (the streamer already printed it incrementally)
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(text)
```

## Citation

```bibtex
@article{yu2023metamath,
  title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
  author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
  journal={arXiv preprint arXiv:2309.12284},
  year={2023}
}
```

```bibtex
@article{jiang2023mistral,
  title={Mistral 7B},
  author={Jiang, Albert Q and Sablayrolles, Alexandre and Mensch, Arthur and Bamford, Chris and Chaplot, Devendra Singh and Casas, Diego de las and Bressand, Florian and Lengyel, Gianna and Lample, Guillaume and Saulnier, Lucile and others},
  journal={arXiv preprint arXiv:2310.06825},
  year={2023}
}
```