---
tags:
- merge
- mergekit
- Maths
- Mistral
base_model:
- mlabonne/OmniBeagle-7B
- WizardLM/WizardMath-7B-V1.1
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
  - name: Pearl-7B-slerp
    results:
      - task:
          type: text-generation
        metrics:
          - name: Average 
            type: Average
            value: 72.75
          - name: ARC
            type: ARC
            value: 68.00
          - name: GSM8K
            type: GSM8K
            value: 73.62
          - name: Winogrande
            type: Winogrande
            value: 81.29
          - name: TruthfulQA
            type: TruthfulQA
            value: 62.35
          - name: HellaSwag
            type: HellaSwag
            value: 87.16
        source:
          name: Open LLM Leaderboard
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
---
<center><img src='https://i.imgur.com/0xFTuAX.png' width='450px'></center>

# Pearl-7B-slerp, an extraordinary 7B model for maths

**03-22-2024 - To date, louisbrulenaudet/Pearl-34B-ties is the "Best 🤝 base merges and moerges model of around 30B" on the Open LLM Leaderboard.**

Pearl-7B-slerp is a merge of the following models:
* [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)

## Evaluation

The evaluation was performed using the Hugging Face Open LLM Leaderboard.

| Model                                     | Average    | ARC   | HellaSwag | MMLU  | TruthfulQA | Winogrande | GSM8K | Params (B)  |
|-------------------------------------------|------------|-------|-----------|-------|------------|------------|-------|--------------|
| **louisbrulenaudet/Pearl-7B-slerp**       |**72.75**   | 68.00 | 87.16     | 64.04 | 62.35      | 81.29      |**73.62**| 7.24       |
| mistralai/Mixtral-8x7B-Instruct-v0.1      | 72.62      | 70.22 | 87.63     | 71.16 | 64.58      | 81.37      | 60.73 | 46.7         |
| microsoft/phi-2                           | 61.33      | 61.09 | 75.11     | 58.11 | 44.47      | 74.35      | 54.81 | 2.78         |
| microsoft/Orca-2-13b                      | 58.64      | 60.67 | 79.81     | 60.37 | 56.41      | 76.64      | 17.97 | 13           |
| mistralai/Mistral-7B-Instruct-v0.1        | 54.96      | 54.52 | 75.63     | 55.38 | 56.28      | 73.72      | 14.25 | 7.24         |
| meta-llama/Llama-2-7b-hf                  | 50.97      | 53.07 | 78.59     | 46.87 | 38.76      | 74.03      | 14.48 | 6.74         |

Spherical Linear Interpolation (SLERP) interpolates between two vectors at a constant rate of change while preserving the geometric properties of the sphere on which they lie.

SLERP is preferred over plain linear interpolation for two reasons. In high-dimensional spaces, linear interpolation tends to shrink the magnitude of the interpolated vector, reducing the scale of the weights. Moreover, the direction of the weights often conveys more meaningful information, such as what features are learned and represented, than the magnitude of the change.

$$\operatorname{slerp}(p_0, p_1; t) = \frac{\sin[(1-t)\,\Omega]}{\sin \Omega}\, p_0 + \frac{\sin[t\,\Omega]}{\sin \Omega}\, p_1$$

The implementation of SLERP involves the following steps:
- Normalize the input vectors to unit length, so that they represent directions rather than magnitudes.
- Compute the angle between the vectors from their dot product.
- If the vectors are nearly collinear, fall back to linear interpolation for numerical stability. Otherwise, compute scale factors from the interpolation factor t (t=0 yields 100% of the first vector, t=1 yields 100% of the second) and the angle between the vectors.
- Weigh the original vectors by these factors and sum them to obtain the interpolated vector.

In short, SLERP offers a robust way to interpolate between weight vectors: it preserves directional information and avoids the magnitude loss that linear interpolation suffers in high-dimensional spaces.
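
As a concrete illustration of the steps above, here is a minimal NumPy sketch of SLERP on plain vectors. It mirrors the formula given earlier; it is an illustration only, not mergekit's actual implementation, which applies the same idea tensor-by-tensor to the model weights.

```python
import numpy as np

def slerp(p0: np.ndarray, p1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between p0 and p1 at factor t in [0, 1]."""
    # 1. Normalize the inputs so they represent directions, not magnitudes.
    v0 = p0 / np.linalg.norm(p0)
    v1 = p1 / np.linalg.norm(p1)

    # 2. Compute the angle Omega between the vectors from their dot product.
    dot = np.clip(np.dot(v0, v1), -1.0, 1.0)
    omega = np.arccos(dot)

    # 3. Nearly collinear vectors: fall back to linear interpolation.
    if abs(np.sin(omega)) < eps:
        return (1.0 - t) * p0 + t * p1

    # 4. Weigh the original vectors by the sine-based scale factors and sum.
    scale0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    scale1 = np.sin(t * omega) / np.sin(omega)
    return scale0 * p0 + scale1 * p1

# t = 0.5 between two orthogonal unit vectors lands halfway along the arc.
print(slerp(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.5))  # [0.7071 0.7071]
```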


## Configuration

```yaml
slices:
  - sources:
      - model: mlabonne/OmniBeagle-7B
        layer_range: [0, 32]
      - model: WizardLM/WizardMath-7B-V1.1
        layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/OmniBeagle-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
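
Each five-value `t` list above acts as a set of anchor points spread across the 32 layers, giving a blend factor that varies smoothly with depth: self-attention weights drift toward one parent as depth increases, while the MLP weights follow the mirror-image curve. Below is a rough sketch of that idea, assuming evenly spaced anchors and linear interpolation between them; mergekit's internal schedule logic may differ.

```python
import numpy as np

# self_attn anchor values from the config above; mlp uses the reversed list.
anchors = [0.0, 0.5, 0.3, 0.7, 1.0]
num_layers = 32  # layer_range: [0, 32]

# Spread the anchors evenly across the layers and interpolate between them.
anchor_positions = np.linspace(0, num_layers - 1, num=len(anchors))
t_per_layer = np.interp(np.arange(num_layers), anchor_positions, anchors)

for layer, t in enumerate(t_per_layer):
    print(f"layer {layer:2d}: t = {t:.3f}")
```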

## Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "louisbrulenaudet/Pearl-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the merged model for text generation, sharding across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

## Citing & Authors

If you use this model in your research, please cite it using the following BibTeX entry.

```BibTeX
@misc{louisbrulenaudet2023,
  author =       {Louis Brulé Naudet},
  title =        {Pearl-7B-slerp, an extraordinary 7B model for maths},
  year =         {2023},
  howpublished = {\url{https://huggingface.co/louisbrulenaudet/Pearl-7B-slerp}},
}
```

## Feedback

If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com).