
Quantization made by Richard Erkhov.

GitHub | Discord | Request more models

speechless-mistral-dolphin-orca-platypus-samantha-7b - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q2_K.gguf | Q2_K | 2.53GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.IQ3_S.gguf | IQ3_S | 2.96GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.IQ3_M.gguf | IQ3_M | 3.06GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q3_K.gguf | Q3_K | 3.28GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_0.gguf | Q4_0 | 3.83GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K.gguf | Q4_K | 4.07GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_1.gguf | Q4_1 | 4.24GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q5_0.gguf | Q5_0 | 4.65GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q5_K.gguf | Q5_K | 4.78GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q5_1.gguf | Q5_1 | 5.07GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q6_K.gguf | Q6_K | 5.53GB |
| speechless-mistral-dolphin-orca-platypus-samantha-7b.Q8_0.gguf | Q8_0 | 7.17GB |
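
To actually run one of these files, a minimal sketch with llama-cpp-python follows. The repo id is assumed from this card's name (adjust it if it differs), and the Q4_K_M file is picked arbitrarily as a common size/quality middle ground.

```python
# Minimal sketch: download one quantized file and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/speechless-mistral-dolphin-orca-platypus-samantha-7b-gguf",  # assumed repo id
    filename="speechless-mistral-dolphin-orca-platypus-samantha-7b.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Write a Python function that reverses a string.", max_tokens=256)
print(result["choices"][0]["text"])
```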

Original model description:

language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2.1
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- ehartford/samantha-data
tags:
- llama-2
- code
license: llama2
model-index:
- name: SpeechlessCoder
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 34.146
      verified: false

speechless-mistral-dolphin-orca-platypus-samantha-7b

This model is a merge of ehartford/dolphin-2.1-mistral-7b, Open-Orca/Mistral-7B-OpenOrca, bhenrym14/mistral-7b-platypus-fp16 and ehartford/samantha-1.2-mistral-7b.

I'm sorry for the long and peculiar name. It started as a lazy habit while making models, a quick way to distinguish different model and dataset combinations, and I didn't expect the previous model (the TheBloke GPTQ version) to become so popular. This time, at several users' request, I am releasing a Mistral-based model, and it inherits the same extra-long naming style. You're welcome to try the model; if it isn't to your taste, please go easy on the criticism.

Code: https://github.com/uukuguy/speechless

HumanEval

| Metric | Value |
|--------|-------|
| humaneval-python | 34.146 |
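
For context, HumanEval pass@1 is the expected fraction of the benchmark's 164 problems for which a sampled completion passes all unit tests. With the unbiased estimator from the Codex paper, generating $n$ samples per problem of which $c$ pass gives

$$\text{pass@}k = \mathbb{E}_{\text{problems}}\left[1 - \frac{\binom{n-c}{k}}{\binom{n}{k}}\right]$$

so at $k = 1$ the 34.146 above corresponds to roughly 56 of 164 problems solved.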

Big Code Models Leaderboard

| Model | humaneval-python |
|-------|------------------|
| CodeLlama-34B-Python | 53.29 |
| CodeLlama-34B-Instruct | 50.79 |
| CodeLlama-13B-Instruct | 50.6 |
| CodeLlama-34B | 45.11 |
| CodeLlama-13B-Python | 42.89 |
| CodeLlama-13B | 35.07 |
| Mistral-7B-v0.1 | 30.488 |

LM-Evaluation-Harness

Open LLM Leaderboard

| Metric | Value |
|--------|-------|
| ARC | 64.33 |
| HellaSwag | 84.4 |
| MMLU | 63.72 |
| TruthfulQA | 52.52 |
| Winogrande | 78.37 |
| GSM8K | 21.38 |
| DROP | 8.66 |
| Average | 53.34 |

Model Card for Mistral-7B-v0.1

The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.

For full details of this model, please read our paper and release blog post.

Model Architecture

Mistral-7B-v0.1 is a transformer model, with the following architecture choices:

  • Grouped-Query Attention
  • Sliding-Window Attention
  • Byte-fallback BPE tokenizer
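
To experiment with the unquantized merge directly, here is a minimal loading sketch with Hugging Face transformers; the repo id is inferred from the author's GitHub handle and should be adjusted if it differs.

```python
# Minimal sketch: load the fp16 merge with transformers and generate a completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights need roughly 15 GB of GPU memory
    device_map="auto",
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```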

Troubleshooting

If you see either of the following errors:

  • KeyError: 'mistral'
  • NotImplementedError: Cannot copy out of meta tensor; no data!

ensure you are using a stable release of Transformers, 4.34.0 or newer.
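
A quick defensive check (a sketch, not part of the official card) that catches an outdated install before the model is loaded:

```python
# Verify that transformers is new enough to know the `mistral` architecture.
import transformers
from packaging import version  # packaging ships as a transformers dependency

if version.parse(transformers.__version__) < version.parse("4.34.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} predates mistral support; "
        "upgrade with: pip install -U 'transformers>=4.34.0'"
    )
```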

Notice

Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.

The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|--------|-------|
| Avg. | 53.34 |
| ARC (25-shot) | 64.33 |
| HellaSwag (10-shot) | 84.4 |
| MMLU (5-shot) | 63.72 |
| TruthfulQA (0-shot) | 52.52 |
| Winogrande (5-shot) | 78.37 |
| GSM8K (5-shot) | 21.38 |
| DROP (3-shot) | 8.66 |