
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Draco-8x7B - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| Draco-8x7B.Q2_K.gguf | Q2_K | 16.12GB |
| Draco-8x7B.IQ3_XS.gguf | IQ3_XS | 18.02GB |
| Draco-8x7B.IQ3_S.gguf | IQ3_S | 19.03GB |
| Draco-8x7B.Q3_K_S.gguf | Q3_K_S | 19.03GB |
| Draco-8x7B.IQ3_M.gguf | IQ3_M | 19.96GB |
| Draco-8x7B.Q3_K.gguf | Q3_K | 21.0GB |
| Draco-8x7B.Q3_K_M.gguf | Q3_K_M | 21.0GB |
| Draco-8x7B.Q3_K_L.gguf | Q3_K_L | 22.51GB |
| Draco-8x7B.IQ4_XS.gguf | IQ4_XS | 23.63GB |
| Draco-8x7B.Q4_0.gguf | Q4_0 | 24.63GB |
| Draco-8x7B.IQ4_NL.gguf | IQ4_NL | 24.91GB |
| Draco-8x7B.Q4_K_S.gguf | Q4_K_S | 24.91GB |
| Draco-8x7B.Q4_K.gguf | Q4_K | 26.49GB |
| Draco-8x7B.Q4_K_M.gguf | Q4_K_M | 26.49GB |
| Draco-8x7B.Q4_1.gguf | Q4_1 | 27.32GB |
| Draco-8x7B.Q5_0.gguf | Q5_0 | 30.02GB |
| Draco-8x7B.Q5_K_S.gguf | Q5_K_S | 30.02GB |
| Draco-8x7B.Q5_K.gguf | Q5_K | 30.95GB |
| Draco-8x7B.Q5_K_M.gguf | Q5_K_M | 30.95GB |
| Draco-8x7B.Q5_1.gguf | Q5_1 | 32.71GB |
| Draco-8x7B.Q6_K.gguf | Q6_K | 35.74GB |
| Draco-8x7B.Q8_0.gguf | Q8_0 | 46.22GB |
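
To use one of these files, download it and load it with a GGUF runtime such as llama.cpp. Below is a minimal sketch using llama-cpp-python; the repo_id is an assumption (a placeholder for this quantized repository), so adjust it and the filename to the file you picked from the table.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from the table above (Q4_K_M is a common size/quality trade-off).
model_path = hf_hub_download(
    repo_id="RichardErkhov/Draco-8x7B-gguf",  # assumed repo id, verify before use
    filename="Draco-8x7B.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
# GPT4 Correct prompt format; see the prompt template section below.
out = llm("GPT4 Correct User: Hello!<|end_of_turn|>GPT4 Correct Assistant:", max_tokens=128)
print(out["choices"][0]["text"])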

Original model description:

license: apache-2.0
tags:
- moe
- openchat
- hermes
- dolphin
- bagel
model-index:
- name: Draco-8x7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.02
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.24
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.96
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 62.65
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 80.66
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.79
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PulsarAI/Draco-8x7B
      name: Open LLM Leaderboard


💫 Draco-8x7B

This is the model card for Draco-8x7B. I used the mergekit repository to build this MoE model.

None of this model's experts is itself a merged model; each expert is an individual fine-tune.

📚 Other branches (Number of Experts Per Token)

The other branches in this repository differ only in the number of experts activated per token; from a git-diff perspective the change is tiny.

A higher number of experts per token usually yields better output quality, but it also increases inference time. The table below lists the branches; a loading sketch follows it.

| Number of experts per token | Branch |
|-----------------------------|--------|
| 2 | main |
| 3 | 3-experts-per-token |
| 4 | 4-experts-per-token |
| 6 | 6-experts-per-token |
| 8 | 8-experts-per-token |
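
As a sketch (assuming the upstream PulsarAI/Draco-8x7B repository), a branch maps to the revision argument of from_pretrained, so picking a branch picks the number of experts activated per token:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "PulsarAI/Draco-8x7B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    revision="4-experts-per-token",  # any branch name from the table above
    torch_dtype=torch.bfloat16,
    device_map="auto",
)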

💬 Prompt Template(s):

This model combines several models, so no single prompt template covers all of them. Try the templates below and use whichever works best for you.

Note: The current chat template in the tokenizer config is set to openchat-3.5-0106's chat template.

Note 2: jondurbin/bagel-dpo-7b-v0.1 uses many prompt templates beyond the ones provided here. Visit jondurbin/bagel-dpo-7b-v0.1 to learn more about these templates.

GPT4 Correct

Used in openchat/openchat-3.5-0106, beowolx/CodeNinja-1.0-OpenChat-7B

GPT4 Correct User: {user}<|end_of_turn|>GPT4 Correct Assistant: {assistant}<|end_of_turn|>
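
Since the tokenizer config ships openchat-3.5-0106's chat template (see the note above), apply_chat_template should produce this GPT4 Correct format for you; a small sketch:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PulsarAI/Draco-8x7B")
messages = [{"role": "user", "content": "Explain mixture-of-experts in one sentence."}]
# add_generation_prompt appends the "GPT4 Correct Assistant:" turn opener.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)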

ChatML:

Used in teknium/OpenHermes-2.5-Mistral-7B, jondurbin/bagel-dpo-7b-v0.1, cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser, senseable/WestLake-7B-v2

<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
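
If you prefer to assemble ChatML by hand, a minimal (hypothetical) helper that reproduces the turns above:

def build_chatml(system: str, user: str) -> str:
    # Hypothetical helper, not part of this repo: builds one ChatML exchange
    # and leaves the assistant turn open for generation.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )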

Math Alpaca

Used in meta-math/MetaMath-Mistral-7B

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response: Let's think step by step.
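
A tiny, hypothetical helper that wraps a question in this template:

def math_alpaca_prompt(instruction: str) -> str:
    # Hypothetical helper, not part of this repo.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response: Let's think step by step."
    )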

πŸ› οΈ Yaml Config

See config
base_model: openchat/openchat-3.5-0106
gate_mode: hidden
dtype: bfloat16

experts:
  - source_model: openchat/openchat-3.5-0106
    positive_prompts: # General (Mistral finetune)
    - "chat"
    - "assistant"
    - "tell me"
    - "explain"

  - source_model: teknium/OpenHermes-2.5-Mistral-7B
    positive_prompts: # General (Mistral finetune)
    - "interact"
    - "converse"
    - "respond"
    - "express"

  - source_model: jondurbin/bagel-dpo-7b-v0.1
    positive_prompts: # Science (Mistral finetune)
    - "science"
    - "biology"
    - "chemistry"
    - "physics"
    - "Newton's laws"
    - "scientific method"
    - "periodic table"
    - "photosynthesis process"

  - source_model: meta-math/MetaMath-Mistral-7B
    positive_prompts: # Math (Mistral finetune)
    - "reason"
    - "math"
    - "mathematics"
    - "solve"
    - "count"

  - source_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
    positive_prompts: # Uncensored (Mistral finetune)
    - "dolphin"
    - "uncensored"
    - "unbiased"
    - "unfiltered"
    - "unrestricted"
    - "offensive"

  - source_model: beowolx/CodeNinja-1.0-OpenChat-7B
    positive_prompts: # Code (openchat-3.5-1210 finetune)
    - "code"
    - "script"
    - "python"
    - "javascript"
    - "programming"
    - "algorithm"

  - source_model: senseable/WestLake-7B-v2
    positive_prompts: # Roleplay (Unknown finetune)
    - "storywriting"
    - "write"
    - "scene"
    - "story"
    - "character"
    - "act as"
    - "you are"

  - source_model: snorkelai/Snorkel-Mistral-PairRM-DPO
    positive_prompts: # Question Answering (? Mistral-7B-Instruct-v0.2 finetune ?)
    - "what happens"
    - "what is"
    - "what can"
    - "why"
    - "who"
    - "can a"

🔄 Quantized versions

Quantized versions of this model are available thanks to TheBloke.

GPTQ
GGUF
AWQ

If you would like to support me:

☕ Buy Me a Coffee

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|--------|-------|
| Avg. | 70.89 |
| AI2 Reasoning Challenge (25-Shot) | 65.02 |
| HellaSwag (10-Shot) | 85.24 |
| MMLU (5-Shot) | 64.96 |
| TruthfulQA (0-shot) | 62.65 |
| Winogrande (5-shot) | 80.66 |
| GSM8k (5-shot) | 66.79 |
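
The reported average checks out against the six metrics:

scores = [65.02, 85.24, 64.96, 62.65, 80.66, 66.79]
print(round(sum(scores) / len(scores), 2))  # 70.89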