Warning: This model is ranked first among 7B models on the Open LLM Leaderboard (January 28th, 2024). However, note that it was produced from many merges: I didn't fine-tune any of the merged models myself, and I couldn't confirm that none of them were trained on the evaluation benchmarks.

Model Card for kaitchup/Mayonnaise-4in1-022

This is a merge of several Mistral-7B fine-tunes, created with mergekit using the TIES method and based on mistralai/Mistral-7B-v0.1.
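
The merged model can be loaded like any Mistral-7B checkpoint. A minimal sketch using the standard Hugging Face transformers API (this example is not from the original card; the prompt and generation settings are arbitrary placeholders):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaitchup/Mayonnaise-4in1-022"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype: float16 in the merge config
    device_map="auto",
)

# Placeholder prompt and sampling settings, for illustration only.
prompt = "The best way to learn a new language is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))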

Model Sources

Created with mergekit using the following configuration:

models:
  - model: mncai/mistral-7b-dpo-v5
    # no parameters necessary for base model
  - model: FelixChao/WestSeverus-7B-DPO-v2
    parameters:
      density: 0.5
      weight: 0.3
  - model: BarryFutureman/NeuralTurdusVariant1-7B
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: mncai/mistral-7b-dpo-v5
parameters:
  normalize: true
dtype: float16
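
To reproduce the merge, the configuration above can be saved to a YAML file and run through mergekit. A sketch using mergekit's Python API (assuming the current mergekit package layout; the file and output paths are arbitrary):

import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the TIES merge recipe shown above (saved as config.yml).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; model weights are downloaded from the Hub as needed.
run_merge(
    merge_config,
    "./Mayonnaise-4in1-022",             # output directory (arbitrary name)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=True,              # reduce peak memory while loading shards
    ),
)

The same merge can also be run with mergekit's mergekit-yaml command-line entry point instead of the Python API.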