---
model-index:
- name: zephyr-math
  results: []
license: apache-2.0
datasets:
- rishiraj/guanaco-style-metamath
language:
- en
tags:
- autotrain
- text-generation
widget:
- text: 'I love AutoTrain because '
---

# Zephyr Math 7B Trained Using AutoTrain

## Model Details

[rishiraj/zephyr-math](https://huggingface.co/rishiraj/zephyr-math) is an LLM (released under the [Apache License 2.0](http://www.apache.org/licenses/)) obtained by fully fine-tuning the powerful [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) model on the [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) dataset.

We aim to achieve a state-of-the-art pass@1 result on the [GSM8k benchmark](https://github.com/openai/grade-school-math). The A100 GPU used for this fine-tuning was generously provided by [Weights & Biases](https://wandb.ai/site). I am thankful to [Soumik Rakshit](https://wandb.ai/geekyrakshit) from the W&B team for his constant support with this integration. The experiment can be tracked on Weights & Biases [here](https://wandb.ai/ml-colabs/huggingface/runs/gamw5iuf).
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61030ed7d6edf00e0107a465/jzl7eBRE0F6YoqtekaSxJ.png)

### Preparing the dataset
AutoTrain Advanced expects your custom CSV dataset in a specific format to work properly. Your training file must contain a "text" column on which the training will be done. For best results, the "text" column should have data in the **### Human: Question?### Assistant: Answer.** format. A great example of the kind of dataset AutoTrain Advanced expects is [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). The [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) dataset, however, has three columns: "query", "response", and "type". We preprocess it by dropping the "type" column and combining the contents of the "query" and "response" columns into a single "text" column in the **### Human: Query?### Assistant: Response.** format, as sketched below. The resulting dataset is [rishiraj/guanaco-style-metamath](https://huggingface.co/datasets/rishiraj/guanaco-style-metamath), and it is used for training.
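
Here is a minimal sketch of this preprocessing step using the 🤗 Datasets library (the exact script used to produce the published dataset may differ, but the column names below match MetaMathQA):

```python
from datasets import load_dataset

# Load the original MetaMathQA dataset
dataset = load_dataset("meta-math/MetaMathQA", split="train")

def to_guanaco_style(example):
    # Merge "query" and "response" into a single "text" column
    # in the ### Human: ...### Assistant: ... format.
    return {
        "text": f"### Human: {example['query']}### Assistant: {example['response']}"
    }

# Apply the conversion and drop the original columns,
# including the unused "type" column.
dataset = dataset.map(
    to_guanaco_style, remove_columns=["query", "response", "type"]
)
```

The converted dataset can then be uploaded with `dataset.push_to_hub(...)` so AutoTrain Advanced can consume it directly from the Hub.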

### Adjusting hyperparameters
AutoTrain Advanced comes with a host of hyperparameters we can tune to get the best model. While the default hyperparameters are a great starting point, I made a few changes suitable for our use case. Here are the hyperparameters I used:
```
learning_rate = 2e-5
num_epochs = 3
batch_size = 4
block_size = 1024
trainer = "sft"
warmup_ratio = 0.03
weight_decay = 0.
gradient_accumulation = 4
use_fp16 = True
use_peft = True
use_int4 = True
merge_adapter = True
lora_r = 16
lora_alpha = 32
lora_dropout = 0.05
logging_steps = 10
log = "wandb"
```
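
AutoTrain handles the training loop itself; as a rough, hand-written equivalent (a sketch, not AutoTrain's actual internals), the same QLoRA run expressed with TRL's `SFTTrainer` would look approximately like this. Note the API shown is the late-2023 TRL interface; newer versions have moved some of these arguments into `SFTConfig`:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "HuggingFaceH4/zephyr-7b-alpha"
# use_int4: load the base model in 4-bit precision
model = AutoModelForCausalLM.from_pretrained(
    model_id, load_in_4bit=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=load_dataset("rishiraj/guanaco-style-metamath", split="train"),
    dataset_text_field="text",   # the column prepared above
    max_seq_length=1024,         # block_size
    peft_config=LoraConfig(      # lora_r / lora_alpha / lora_dropout
        r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"
    ),
    args=TrainingArguments(
        output_dir="zephyr-math",
        learning_rate=2e-5,
        num_train_epochs=3,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        warmup_ratio=0.03,
        weight_decay=0.0,
        logging_steps=10,
        fp16=True,               # use_fp16
        report_to="wandb",       # log = "wandb"
    ),
)
trainer.train()
```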

### Results
Check out the [W&B Report]() for a detailed overview of the fine-tuned model, including its benchmark scores on a variety of tests such as ARC, HellaSwag, MMLU, and TruthfulQA. I also included a comparison with other open-source LLMs on GSM8k Pass@1 and MATH Pass@1.

## Model Usage

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
import torch
from transformers import pipeline

# Load the model in bfloat16 and shard it across the available devices
pipe = pipeline("text-generation", model="rishiraj/zephyr-math", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
# Format the conversation with the model's chat template before generating
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Sample a completion of up to 256 new tokens
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
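
Note that `generated_text` contains the prompt followed by the completion; pass `return_full_text=False` to the pipeline call if you only want the model's reply.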

## Experiments

| Model               | GSM8k Pass@1 | MATH Pass@1 |
|---------------------|--------------|-------------|
| MPT-7B              | 6.8          | 3.0         |
| Falcon-7B           | 6.8          | 2.3         |
| LLaMA-1-7B          | 11.0         | 2.9         |
| LLaMA-2-7B          | 14.6         | 2.5         |
| MPT-30B             | 15.2         | 3.1         |
| LLaMA-1-13B         | 17.8         | 3.9         |
| GPT-Neo-2.7B        | 19.5         | --          |
| Falcon-40B          | 19.6         | 2.5         |
| Baichuan-chat-13B   | 23.9         | --          |
| Vicuna-v1.3-13B     | 27.6         | --          |
| LLaMA-2-13B         | 28.7         | 3.9         |
| InternLM-7B         | 31.2         | --          |
| ChatGLM-2-6B        | 32.4         | --          |
| GPT-J-6B            | 34.9         | --          |
| LLaMA-1-33B         | 35.6         | 3.9         |
| LLaMA-2-34B         | 42.2         | 6.24        |
| RFT-7B              | 50.3         | --          |
| LLaMA-1-65B         | 50.9         | 10.6        |
| Qwen-7B             | 51.6         | --          |
| WizardMath-7B       | 54.9         | 10.7        |
| LLaMA-2-70B         | 56.8         | 13.5        |
| WizardMath-13B      | 63.9         | 14.0        |
| MAmmoTH-7B (COT)    | 50.5         | 10.4        |
| MAmmoTH-7B (POT+COT)| 53.6         | 31.5        |
| Arithmo-Mistral-7B  | 74.7         | 25.3        |
| MetaMath-7B         | 66.5         | 19.8        |
| MetaMath-13B        | 72.3         | 22.4        |
| 🔥 **Zephyr-Math-7B** | **??**     | **??**        |

## Citation

```bibtex
@software{acharya2023zephyrmath,
  title = {Zephyr Math: Zephyr 7B Alpha Model Fine-tuned on MetaMathQA Dataset},
  author = {Rishiraj Acharya and Soumik Rakshit},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/rishiraj/zephyr-math}},
}
```