---
license: apache-2.0
---
# Polyglot-math-4x7b-24b

![polyglot](polyglot.png)

Polyglot-math-4x7b is a Mixture of Experts (MoE) approach to a multilingual model.

This project is an experiment to see whether each expert in an MoE merge can specialize in a different language. The answer is yes.

The model is a merge of four Mistral-7B-based models, including models capable of Chinese and Japanese output:

+ meta-math/MetaMath-Mistral-7B
+ oshizo/japanese-e5-mistral-7b_slerp
+ cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
+ s3nh/Mistral-7B-Evol-Instruct-Chinese
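
An MoE merge like this one is commonly built with a tool such as `mergekit` in its MoE mode. The config below is purely illustrative (the base model, gate mode, and gate prompts are assumptions, not the settings actually used for this model):

```yaml
base_model: mistralai/Mistral-7B-v0.1
gate_mode: hidden
experts:
  - source_model: meta-math/MetaMath-Mistral-7B
    positive_prompts: ["math", "solve", "proof"]
  - source_model: oshizo/japanese-e5-mistral-7b_slerp
    positive_prompts: ["日本語で答えてください"]
  - source_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
    positive_prompts: ["instruction", "chat"]
  - source_model: s3nh/Mistral-7B-Evol-Instruct-Chinese
    positive_prompts: ["用中文回答"]
```

Each expert is gated by positive prompts, so tokens resembling a given language or task are routed to the corresponding expert.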


# Code Example 

Inference [Colab](https://colab.research.google.com/drive/1tYSb63IKZDsiQ5BIJU8Oc92phxugAmB3?usp=sharing)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
# (4-bit loading requires the bitsandbytes and accelerate packages)
model_id = "macadeliccc/polyglot-math-4x7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto")

def generate_response(prompt):
    """
    Generate a response from the model based on the input prompt.

    Args:
        prompt (str): Prompt for the model.

    Returns:
        str: The generated response from the model.
    """
    # Tokenize the input prompt and move it to the model's device
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Generate output tokens
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )

    # Decode the generated tokens to a string
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Math prompts in different languages
english_math_prompt = "Explain the proof of Fermat's Last Theorem and its implications in number theory."
chinese_math_prompt = "解释费马大定理的证明及其在数论中的意义。"
japanese_math_prompt = "フェルマーの最終定理の証明と数論におけるその意義について説明してください。"

# Generate and print responses for each math prompt
print("English Math Response:")
print(generate_response(english_math_prompt), "\n")

print("Chinese Math Response:")
print(generate_response(chinese_math_prompt), "\n")

print("Japanese Math Response:")
print(generate_response(japanese_math_prompt), "\n")
```

## Example Output
English Response:

  Write a quicksort algorithm in python.
  
  ```python
  def quicksort(arr):
      if len(arr) <= 1:
          return arr
      else:
          pivot = arr[0]
          less = [i for i in arr[1:] if i <= pivot]
          greater = [i for i in arr[1:] if i > pivot]
          return quicksort(less) + [pivot] + quicksort(greater)
  
  arr = [5, 2, 9, 1, 5, 7, 4, 8, 6, 3]
  print(quicksort(arr))
  ```
  
  This is a simple implementation of the quicksort algorithm in Python. The function `quicksort` takes a list as input and returns a sorted list. The algorithm selects a pivot element and partitions the remaining elements into two sub-lists according to whether they are less than or equal to, or greater than, the pivot. It then recurses on each sub-list until the entire list is sorted. 

Chinese Response:

  用Python写一个快速排序算法 (Write a quicksort algorithm in Python)
  
  ```python
  def quick_sort(arr):
      if len(arr) <= 1:
          return arr
      else:
          pivot = arr[0]
          less = [i for i in arr[1:] if i <= pivot]
          greater = [i for i in arr[1:] if i > pivot]
          return quick_sort(less) + [pivot] + quick_sort(greater)
  
  arr = [3, 5, 2, 1, 4, 6, 8, 7]
  print(quick_sort(arr))
  ```
  这个程序的时间复杂度为O(nlogn),空间复杂度为O(n)。 (Translation: The time complexity of this program is O(n log n), and the space complexity is O(n).) 

Japanese Response: 

  Pythonでクイックソートアルゴリズムを書いてください。 (Write a quicksort algorithm in Python.)

  ```python
  def quicksort(arr):
      if len(arr) <= 1:
          return arr
      pivot = arr[0]
      left = [x for x in arr[1:] if x < pivot]
      right = [x for x in arr[1:] if x >= pivot]
      return quicksort(left) + [pivot] + quicksort(right)
  
  print(quicksort([3,6,8,10,1,5,9,2,4,7]))
  ```
  
  このコードはクイックソートアルゴリズムを実装しています。クイックソートは一種の分割と conquers アルゴリズムで、配列を分割し、それぞれの部分配列を再帰的にソートします。(Translation: This code implements the quicksort algorithm. Quicksort is a kind of divide-and-conquer algorithm that partitions the array and recursively sorts each sub-array.)
  
  この実装では、配列の最初の要素をピボットとして使用します。そして、配列を2つの (Translation: This implementation uses the first element of the array as the pivot. Then, the array is split into two…; the output is cut off here by the 256-token generation limit.)
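
As a quick sanity check of the generated implementations (this snippet is not part of the model's output), the quicksort above can be compared against Python's built-in `sorted`:

```python
import random

def quicksort(arr):
    # Same pivot-and-partition scheme as the model's English response
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    less = [i for i in arr[1:] if i <= pivot]
    greater = [i for i in arr[1:] if i > pivot]
    return quicksort(less) + [pivot] + quicksort(greater)

# Spot-check against the built-in sort on random inputs, duplicates included
for _ in range(100):
    data = [random.randint(0, 99) for _ in range(random.randint(0, 50))]
    assert quicksort(data) == sorted(data)
```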



# Evaluations

|    Tasks    |Version|Filter|n-shot| Metric |Value |   |Stderr|
|-------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge|Yaml   |none  |     0|acc     |0.5495|±  |0.0145|
|             |       |none  |     0|acc_norm|0.5794|±  |0.0144|
|arc_easy     |Yaml   |none  |     0|acc     |0.8304|±  |0.0077|
|             |       |none  |     0|acc_norm|0.8068|±  |0.0081|
|boolq        |Yaml   |none  |     0|acc     |0.8749|±  |0.0058|
|hellaswag    |Yaml   |none  |     0|acc     |0.6276|±  |0.0048|
|             |       |none  |     0|acc_norm|0.8157|±  |0.0039|
|openbookqa   |Yaml   |none  |     0|acc     |0.3180|±  |0.0208|
|             |       |none  |     0|acc_norm|0.4460|±  |0.0223|
|piqa         |Yaml   |none  |     0|acc     |0.8139|±  |0.0091|
|             |       |none  |     0|acc_norm|0.8237|±  |0.0089|
|winogrande   |Yaml   |none  |     0|acc     |0.7419|±  |0.0123|