
Gonzo-Chat-7B

Gonzo-Chat-7B is a merged LLM based on Mistral v0.1 with an 8192-token context length that likes to chat, roleplay, work with agents, do some light programming, and then beat the brakes off you in the back alley...

The BEST Open Source 7B Street Fighting LLM of 2024!!!

[Image: SF-III.jpg]
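A minimal chat sketch with Hugging Face transformers is below. The repo id is a placeholder (substitute this model's actual Hugging Face path), the sampling settings are illustrative rather than tuned, and the prompt format comes from whatever chat template ships with the tokenizer.

```python
# Minimal chat sketch. Assumptions: the repo id is a placeholder,
# sampling settings are illustrative, and the chat template is taken
# from the tokenizer rather than hardcoded.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gonzo-Chat-7B"  # placeholder: use the model's actual HF repo path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Pick a fight with me, politely."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```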

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 66.63 |
| AI2 Reasoning Challenge (25-shot) | 65.02 |
| HellaSwag (10-shot)               | 85.40 |
| MMLU (5-shot)                     | 63.75 |
| TruthfulQA (0-shot)               | 60.23 |
| Winogrande (5-shot)               | 77.74 |
| GSM8k (5-shot)                    | 47.61 |
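These tasks and shot counts match the standard Open LLM Leaderboard setup. As a sketch for re-running a single number locally, assuming EleutherAI's lm-evaluation-harness (v0.4+ Python API) and a placeholder repo id:

```python
# Sketch: re-running one leaderboard task with lm-evaluation-harness.
# Assumptions: lm-eval v0.4+ API and a placeholder repo id; scores may
# differ slightly from the leaderboard's pinned harness version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Gonzo-Chat-7B,dtype=bfloat16",  # placeholder repo id
    tasks=["arc_challenge"],  # AI2 Reasoning Challenge
    num_fewshot=25,           # the leaderboard uses 25-shot for ARC
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```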

LLM-Colosseum Results

All contestants fought using the same LLM-Colosseum default settings. Each contestant fought 25 rounds against every other contestant.

https://github.com/OpenGenerativeAI/llm-colosseum

Gonzo-Chat-7B vs. Mistral v0.2, Dolphin-Mistral v0.2, and Deepseek-Coder-6.7b-instruct

[Image: games-won.png]

[Image: download.png]

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the DARE TIES merge method using eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO as a base.
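For intuition, here is an illustrative sketch of the DARE step, not mergekit's actual code: each task vector (fine-tuned weights minus base weights) is randomly dropped elementwise with probability 1 - density, and the survivors are rescaled by 1/density, after which TIES elects a majority sign per parameter and sums the agreeing deltas using the per-model weights in the configuration below.

```python
# Illustrative sketch of DARE pruning (not mergekit's implementation).
import torch

def dare_prune(base: torch.Tensor, finetuned: torch.Tensor, density: float) -> torch.Tensor:
    """Drop each delta element with probability 1 - density, rescale survivors."""
    delta = finetuned - base                                  # task vector
    mask = torch.bernoulli(torch.full_like(delta, density))  # keep with p = density
    return delta * mask / density                             # unbiased estimate of delta

# TIES then resolves sign disagreements across the pruned task vectors
# (majority sign per parameter) before the weighted sum is added to the base.
```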

Models Merged

The following models were included in the merge:

- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO
- Nondzu/Mistral-7B-Instruct-v0.2-code-ft

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
    # No parameters necessary for base model
  - model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
    parameters:
      density: 0.53
      weight: 0.4
  - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
    parameters:
      density: 0.53
      weight: 0.3
  - model: Nondzu/Mistral-7B-Instruct-v0.2-code-ft
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
parameters:
  int8_mask: true
dtype: bfloat16
```
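To reproduce the merge, save the YAML above to a file and feed it to mergekit, either through the `mergekit-yaml` CLI or through its Python entry points. The sketch below follows the usage shown in mergekit's README; verify the names against your installed version.

```python
# Sketch following mergekit's README-style Python usage (verify against
# your installed mergekit version; paths here are hypothetical).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("gonzo-chat-7b.yml") as f:  # the YAML configuration above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Gonzo-Chat-7B",  # hypothetical output directory
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```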