---
license: apache-2.0
---

# Polyglot-math-4x7b-24b

![polyglot](polyglot.png)

Polyglot-math-4x7b is a Mixture of Experts (MoE) approach to a multilingual math model. This project is an experiment to see whether each expert can be a different language. The answer is yes. The model is a merge of four Mistral-7B experts, including models capable of Chinese and Japanese output:

+ meta-math/MetaMath-Mistral-7B
+ oshizo/japanese-e5-mistral-7b_slerp
+ cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
+ s3nh/Mistral-7B-Evol-Instruct-Chinese

# Code Example

Inference notebook: [Colab](https://colab.research.google.com/drive/1tYSb63IKZDsiQ5BIJU8Oc92phxugAmB3?usp=sharing)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer (4-bit loading requires bitsandbytes)
model_id = "macadeliccc/polyglot-math-4x7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

def generate_response(prompt):
    """
    Generate a response from the model based on the input prompt.

    Args:
        prompt (str): Prompt for the model.

    Returns:
        str: The generated response from the model.
    """
    # Tokenize the input prompt
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generate output tokens
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )

    # Decode the generated tokens to a string
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response

# Math prompts in different languages
english_math_prompt = "Explain the proof of Fermat's Last Theorem and its implications in number theory."
chinese_math_prompt = "解释费马大定理的证明及其在数论中的意义。"
japanese_math_prompt = "フェルマーの最終定理の証明と数論におけるその意義について説明してください。"

# Generate and print responses for each math prompt
print("English Math Response:")
print(generate_response(english_math_prompt), "\n")

print("Chinese Math Response:")
print(generate_response(chinese_math_prompt), "\n")

print("Japanese Math Response:")
print(generate_response(japanese_math_prompt), "\n")
```

## Example Output

**English:** Write a quicksort algorithm in python.

```python
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr[0]
        less = [i for i in arr[1:] if i <= pivot]
        greater = [i for i in arr[1:] if i > pivot]
        return quicksort(less) + [pivot] + quicksort(greater)

arr = [5, 2, 9, 1, 5, 7, 4, 8, 6, 3]
print(quicksort(arr))
```

This is a simple implementation of the quicksort algorithm in Python. The function `quicksort` takes an array as input and returns a sorted array. The algorithm works by selecting a pivot element from the array and partitioning the other elements into two sub-arrays according to whether they are less than or greater than the pivot. The process is then repeated recursively on the sub-arrays until the entire array is sorted.
**Chinese:** 用Python写一个快速排序算法 *(Write a quicksort algorithm in Python.)*

```python
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr[0]
        less = [i for i in arr[1:] if i <= pivot]
        greater = [i for i in arr[1:] if i > pivot]
        return quick_sort(less) + [pivot] + quick_sort(greater)

arr = [3, 5, 2, 1, 4, 6, 8, 7]
print(quick_sort(arr))
```

这个程序的时间复杂度为O(nlogn),空间复杂度为O(n)。

*(English: The time complexity of this program is O(n log n) and its space complexity is O(n).)*

**Japanese:** Pythonでクイックソートアルゴリズムを書いてください。 *(Write a quicksort algorithm in Python.)*

```python
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = [x for x in arr[1:] if x < pivot]
    right = [x for x in arr[1:] if x >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([3,6,8,10,1,5,9,2,4,7]))
```

このコードはクイックソートアルゴリズムを実装しています。クイックソートは一種の分割と conquers アルゴリズムで、配列を分割し、それぞれの部分配列を再帰的にソートします。この実装では、配列の最初の要素をピボットとして使用します。そして、配列を2つの

*(English: This code implements the quicksort algorithm. Quicksort is a kind of divide-and-conquer algorithm: it splits the array and recursively sorts each sub-array. In this implementation, the first element of the array is used as the pivot. The array is then split into two... [output truncated])*

# Evaluations

| Tasks         | Version | Filter | n-shot | Metric   | Value  |    | Stderr |
|---------------|---------|--------|-------:|----------|-------:|----|-------:|
| arc_challenge | Yaml    | none   |      0 | acc      | 0.5495 | ±  | 0.0145 |
|               |         | none   |      0 | acc_norm | 0.5794 | ±  | 0.0144 |
| arc_easy      | Yaml    | none   |      0 | acc      | 0.8304 | ±  | 0.0077 |
|               |         | none   |      0 | acc_norm | 0.8068 | ±  | 0.0081 |
| boolq         | Yaml    | none   |      0 | acc      | 0.8749 | ±  | 0.0058 |
| hellaswag     | Yaml    | none   |      0 | acc      | 0.6276 | ±  | 0.0048 |
|               |         | none   |      0 | acc_norm | 0.8157 | ±  | 0.0039 |
| openbookqa    | Yaml    | none   |      0 | acc      | 0.3180 | ±  | 0.0208 |
|               |         | none   |      0 | acc_norm | 0.4460 | ±  | 0.0223 |
| piqa          | Yaml    | none   |      0 | acc      | 0.8139 | ±  | 0.0091 |
|               |         | none   |      0 | acc_norm | 0.8237 | ±  | 0.0089 |
| winogrande    | Yaml    | none   |      0 | acc      | 0.7419 | ±  | 0.0123 |
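The table above is in EleutherAI lm-evaluation-harness format. As a rough guide (not part of the original card), a run along the following lines should produce comparable numbers; the harness version (v0.4+ is assumed here), task names, and API details may need adjusting:

```python
# Hedged sketch: reproduce the zero-shot benchmarks with lm-evaluation-harness.
# Assumes lm-eval v0.4+; task names and API details may differ by version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                              # Hugging Face backend
    model_args="pretrained=macadeliccc/polyglot-math-4x7b",  # model under test
    tasks=[
        "arc_challenge", "arc_easy", "boolq", "hellaswag",
        "openbookqa", "piqa", "winogrande",
    ],
    num_fewshot=0,        # the table above reports 0-shot results
    batch_size="auto",
)

# Each task maps to a dict of metrics (acc, acc_norm, and their stderr),
# which is what the markdown table above summarizes.
for task, metrics in results["results"].items():
    print(task, metrics)
```

Evaluating a 4x7B MoE at full precision needs substantial GPU memory, so quantized loading (e.g. passing `load_in_4bit=True` in `model_args`) may be necessary on smaller GPUs.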