
Beagle14-7B GGUF

Original model: Beagle14-7B
Model creator: Maxime Labonne

This repo contains GGUF format model files for Maxime Labonne’s Beagle14-7B.

Beagle14-7B is a merge of the following models using LazyMergekit:

What is GGUF?

GGUF is a file format for representing AI models, introduced by the llama.cpp team on August 21st, 2023. It is the third version of the format and replaces GGML, which llama.cpp no longer supports.

These files were converted using llama.cpp build 1879 (revision 3e5ca79).
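To make the format concrete, here is a minimal sketch of how a GGUF header begins: the 4-byte magic `GGUF` followed by a little-endian uint32 format version. The `read_gguf_header` helper and the synthetic header bytes are illustrative assumptions, not part of llama.cpp's API.

```python
import struct

def read_gguf_header(data: bytes) -> int:
    # A GGUF file starts with the 4-byte magic b"GGUF",
    # followed by a little-endian uint32 format version.
    magic, version = struct.unpack("<4sI", data[:8])
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return version

# Synthetic header bytes for illustration (version 3, the current format):
header = b"GGUF" + struct.pack("<I", 3)
print(read_gguf_header(header))  # 3
```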

Prompt template: Zephyr

The Zephyr-style template appears to work well:

<|system|>
{{system_message}}</s>
<|user|>
{{prompt}}</s>
<|assistant|>
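The template above can be filled in programmatically before passing the string to your inference runtime. This is a minimal sketch; the helper name `build_zephyr_prompt` is an illustrative assumption.

```python
def build_zephyr_prompt(system_message: str, prompt: str) -> str:
    # Fill the Zephyr template shown above.
    # </s> marks the end of each turn; the string ends after the
    # assistant tag so the model continues from there.
    return (
        f"<|system|>\n{system_message}</s>\n"
        f"<|user|>\n{prompt}</s>\n"
        f"<|assistant|>\n"
    )

print(build_zephyr_prompt("You are a helpful assistant.", "Hello!"))
```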

Download & run with cnvrs on iPhone, iPad, and Mac!

cnvrs.ai

cnvrs is the best app for private, local AI on your device:

  • create & save Characters with custom system prompts & temperature settings
  • download and experiment with any GGUF model you can find on HuggingFace!
  • make it your own with custom Theme colors
  • powered by Metal ⚡️ & Llama.cpp, with haptics during response streaming!
  • try it out yourself today, on TestFlight!
  • follow cnvrs on Twitter to stay up to date

Original Model Evaluations:

The evaluation was performed by the model's creator using LLM AutoEval on the Nous suite, as reported on mlabonne's alternative leaderboard, YALL: Yet Another LLM Leaderboard.

| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| Beagle14-7B | 44.38 | 76.53 | 69.44 | 47.25 | 59.4 |
| OpenHermes-2.5-Mistral-7B | 42.75 | 72.99 | 52.99 | 40.94 | 52.42 |
| NeuralHermes-2.5-Mistral-7B | 43.67 | 73.24 | 55.37 | 41.76 | 53.51 |
| Nous-Hermes-2-SOLAR-10.7B | 47.79 | 74.69 | 55.92 | 44.84 | 55.81 |
| Marcoro14-7B-slerp | 44.66 | 76.24 | 64.15 | 45.64 | 57.67 |
| CatMarcoro14-7B-slerp | 45.21 | 75.91 | 63.81 | 47.31 | 58.06 |
Format: GGUF
Model size: 7.24B params
Architecture: llama
