Lumosia-v2-MoE-4x10.7

The Lumosia series, upgraded with Lumosia V2.

What's New in Lumosia V2?

Lumosia V2 takes the original vision of being an "all-rounder" and refines it with more nuanced capabilities.

Topic/Prompt Based Approach:

Expert routing is steered by topics and whole prompts, diverging from the keyword-based approach of its counterpart, Umbra.

Context and Coherence:

Lumosia V2 has an 8k base context (scrolling window) and can maintain coherence up to 16k tokens.
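
As a quick check, the advertised base window can be read from the merged config. A minimal sketch, assuming the merge exposes the standard max_position_embeddings field used by its SOLAR parents:

```python
from transformers import AutoConfig

# Read the base context length from the model config (assumption: the merged
# config exposes the standard max_position_embeddings field).
cfg = AutoConfig.from_pretrained("Steelskull/Lumosia-v2-MoE-4x10.7")
print(cfg.max_position_embeddings)
```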

Balanced and Versatile:

The core ethos of Lumosia V2 is balance. It's designed to be your go-to assistant.

Experimentation and User-Centric Development:

Lumosia V2 remains an experimental model, a mosaic of the best-performing Solar models (selected based on user experience). This version is a testament to the idea that innovation is a journey, not a destination.

Template:

```
### System:

### USER:{prompt}

### Assistant:
```
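
If you assemble prompts by hand, a small hypothetical helper like the one below reproduces this template (placing the system text directly after "### System:" is an assumption):

```python
def build_prompt(system: str, user: str) -> str:
    """Hypothetical helper: assemble a prompt in the template shown above."""
    return f"### System:\n{system}\n\n### USER:{user}\n\n### Assistant:\n"

print(build_prompt("You are Lumosia, a helpful assistant.", "Hello!"))
```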

Settings:

Temp: 1.0
min-p: 0.02-0.1
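
For reference, a minimal sketch of how these settings map onto transformers generation arguments; min_p support assumes a recent transformers release, and 0.05 is just one point in the suggested 0.02-0.1 range:

```python
from transformers import GenerationConfig

# Recommended sampling settings (assumption: transformers is new enough to support min_p).
gen_config = GenerationConfig(do_sample=True, temperature=1.0, min_p=0.05)
# Then pass it to generation: model.generate(**inputs, generation_config=gen_config)
```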

Evals:

  • Avg: 73.75
  • ARC: 70.39
  • HellaSwag: 87.87
  • MMLU: 66.45
  • T-QA: 68.48
  • Winogrande: 84.21
  • GSM8K: 65.13

Examples:

Example 1:

User:

Lumosia:

Example 2:

User:

Lumosia:

🧩 Configuration

```yaml
base_model: DopeorNope/SOLARC-M-10.7B
gate_mode: hidden
dtype: bfloat16

experts:
  - source_model: DopeorNope/SOLARC-M-10.7B
    positive_prompts:

    negative_prompts:

  - source_model: Sao10K/Fimbulvetr-10.7B-v1  # [Updated]
    positive_prompts:

    negative_prompts:

  - source_model: jeonsworld/CarbonVillain-en-10.7B-v4  # [Updated]
    positive_prompts:

    negative_prompts:

  - source_model: kyujinpy/Sakura-SOLAR-Instruct
    positive_prompts:

    negative_prompts:
```
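
With gate_mode: hidden, mergekit initializes each expert's router weights from hidden-state representations of the positive/negative prompts. The sketch below is a conceptual illustration only (not the model's actual code) of how a Mixtral-style router then picks experts per token:

```python
import torch

# One token's hidden state (SOLAR-style models use a 4096-dim hidden size).
hidden = torch.randn(1, 4096)

# The router: a linear layer producing one logit per expert (4 experts here).
gate = torch.nn.Linear(4096, 4, bias=False)

# Softmax over expert logits, then keep the top-2 experts per token, as in
# Mixtral-style MoE layers; each token mixes only those experts' outputs.
probs = torch.softmax(gate(hidden), dim=-1)
weights, expert_ids = torch.topk(probs, k=2, dim=-1)
print(expert_ids, weights)
```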

💻 Usage

```python
# Install dependencies: bitsandbytes enables 4-bit loading, accelerate handles device placement.
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Steelskull/Lumosia-v2-MoE-4x10.7"

# Load the tokenizer and build a text-generation pipeline with a 4-bit quantized model.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the request with the tokenizer's chat template, then sample a response.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
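
Note that load_in_4bit requires a CUDA-capable GPU with bitsandbytes installed; drop that flag (and switch torch_dtype to bfloat16 if your hardware supports it) to run without quantization.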

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 73.75 |
| AI2 Reasoning Challenge (25-Shot) | 70.39 |
| HellaSwag (10-Shot) | 87.87 |
| MMLU (5-Shot) | 66.45 |
| TruthfulQA (0-shot) | 68.48 |
| Winogrande (5-shot) | 84.21 |
| GSM8k (5-shot) | 65.13 |