---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- moe
- merge
- abideen/NexoNimbus-7B
- mlabonne/NeuralMarcoro14-7B
model-index:
- name: NexoNimbus-MoE-2x7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 66.81
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/NexoNimbus-MoE-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.66
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/NexoNimbus-MoE-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.51
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/NexoNimbus-MoE-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 53.06
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/NexoNimbus-MoE-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 81.53
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/NexoNimbus-MoE-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 53.53
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abideen/NexoNimbus-MoE-2x7B
      name: Open LLM Leaderboard
---

# NexoNimbus-MoE-2x7B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/_bzC6xkVIHW0tSigBxUI3.png)

NexoNimbus-MoE-2x7B is a Mixture of Experts (MoE) model built from the following two models:
* [abideen/NexoNimbus-7B](https://huggingface.co/abideen/NexoNimbus-7B)
* [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)

## 🏆 Evaluation

NexoNimbus-MoE-2x7B is the 10th best-performing 13B LLM on the Open LLM Leaderboard:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/z8E728H5fJqVtKNeGuwjX.png)

| Task          |Version| Metric |Value|   |Stderr|
|---------------|------:|--------|----:|---|-----:|
|arc_challenge  |      0|acc     |62.28|±  |  1.41|
|               |       |acc_norm|66.80|±  |  1.37|
|hellaswag      |      0|acc     |66.83|±  |  0.46|
|               |       |acc_norm|85.66|±  |  0.34|
|gsm8k          |      0|acc     |53.52|±  |  1.37|
|winogrande     |      0|acc     |81.53|±  |  1.09|
|mmlu           |      0|acc     |64.51|±  |  1.00|

Average: 67.51%

### TruthfulQA

| Task         |Version|Metric|Value|   |Stderr|
|--------------|------:|------|----:|---|-----:|
|truthfulqa_mc |      1|mc1   |35.98|±  |  1.68|
|              |       |mc2   |53.05|±  |  1.53|

## 🧩 Configuration

```yaml
base_model: teknium/OpenHermes-2.5-Mistral-7B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: abideen/NexoNimbus-7B
    positive_prompts:
      - "Mathematics"
      - "Physics"
      - "Chemistry"
      - "Biology"
      - "Medicine"
      - "Engineering"
      - "Computer Science"
    negative_prompts:
      - "History"
      - "Philosophy"
      - "Linguistics"
      - "Literature"
      - "Art and Art History"
      - "Music Theory and Composition"
      - "Performing Arts (Theater, Dance)"
  - source_model: mlabonne/NeuralMarcoro14-7B
    positive_prompts:
      - "Earth Sciences (Geology, Meteorology, Oceanography)"
      - "Environmental Science"
      - "Astronomy and Space Science"
      - "Psychology"
      - "Sociology"
      - "Anthropology"
      - "Political Science"
      - "Economics"
    negative_prompts:
      - "Education"
      - "Law"
      - "Theology and Religious Studies"
      - "Communication Studies"
      - "Business and Management"
      - "Agricultural Sciences"
      - "Nutrition and Food Science"
      - "Sports Science"
```

## 💻 Usage

Here's a [Colab notebook](https://colab.research.google.com/drive/1B1Q7vO95cDkEJbKIPhOWr6exB9-Q_lr-?usp=sharing) to run NexoNimbus-MoE-2x7B in 4-bit precision on a free T4 GPU.

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "abideen/NexoNimbus-MoE-2x7B"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what is data science."}]
prompt = pipeline.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipeline(
    prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95
)
print(outputs[0]["generated_text"])
```

Sample output:

> "Data science is an interdisciplinary field that combines mathematics, statistics, computer science, and domain expertise in order to extract meaningful insights and knowledge from structured and unstructured data. It involves the process of collecting, cleaning, transforming, analyzing, and visualizing data in order to identify patterns, trends, and relationships that can inform decision-making and drive business strategies. Data scientists use various tools and techniques, such as machine learning, deep learning, and natural language processing, to develop predictive models, optimize processes, and automate decision-making. The field of data science is rapidly evolving as more and more data is generated and the demand for data-driven insights continues to grow."
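For intuition about what the configuration above produces: the positive/negative prompt lists are used to initialize a per-layer router, and at inference each token's hidden state is scored against the two experts by a softmax gate, with the layer output being the gate-weighted mixture of expert outputs. The following is a minimal, illustrative sketch of that routing step, not mergekit's or the model's actual implementation; the sizes and all names here are hypothetical.

```python
import numpy as np

def moe_gate(hidden, w_gate):
    """Softmax-normalize the router's score for each expert.

    hidden: (d,) token hidden state; w_gate: (n_experts, d) router weights.
    Returns routing probabilities over the experts.
    """
    logits = w_gate @ hidden
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
d, n_experts = 8, 2  # toy sizes; the real model has a much larger hidden dim

hidden = rng.normal(size=d)
w_gate = rng.normal(size=(n_experts, d))
probs = moe_gate(hidden, w_gate)

# Stand-ins for the outputs of the two 7B experts' feed-forward blocks.
expert_outputs = [rng.normal(size=d) for _ in range(n_experts)]

# The token's layer output is the probability-weighted mixture of expert outputs.
mixed = sum(p * out for p, out in zip(probs, expert_outputs))
print(probs.sum(), mixed.shape)
```

Because both experts share the same base architecture (`teknium/OpenHermes-2.5-Mistral-7B`), only the feed-forward blocks are duplicated per expert; attention weights are shared, which is why the merged model is smaller than two full 7B models.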
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abideen__NexoNimbus-MoE-2x7B).

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |67.51|
|AI2 Reasoning Challenge (25-Shot)|66.81|
|HellaSwag (10-Shot)              |85.66|
|MMLU (5-Shot)                    |64.51|
|TruthfulQA (0-shot)              |53.06|
|Winogrande (5-shot)              |81.53|
|GSM8k (5-shot)                   |53.53|
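The reported average is the unweighted mean of the six benchmark scores; the leaderboard computes it from unrounded values, so averaging the rounded per-task numbers in the table reproduces it to within 0.01:

```python
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 66.81,
    "HellaSwag (10-Shot)": 85.66,
    "MMLU (5-Shot)": 64.51,
    "TruthfulQA (0-shot)": 53.06,
    "Winogrande (5-shot)": 81.53,
    "GSM8k (5-shot)": 53.53,
}

# Unweighted mean over the six tasks.
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # → 67.52, within rounding of the reported 67.51
```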