
Check out the fine-tuned version: https://huggingface.co/NotAiLOL/Apollo-7b-orpo-Experimental


This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged with the Model Stock merge method, using yam-peleg/Experiment26-7B as the base model.

Models Merged

The following models were included in the merge:

- MaziyarPanahi/Calme-7B-Instruct-v0.9
- BarraHome/Mistroll-7B-v2.2
- nbeerbower/bophades-mistral-truthy-DPO-7B
- jondurbin/bagel-dpo-7b-v0.5

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: MaziyarPanahi/Calme-7B-Instruct-v0.9
  - model: BarraHome/Mistroll-7B-v2.2
  - model: nbeerbower/bophades-mistral-truthy-DPO-7B
  - model: jondurbin/bagel-dpo-7b-v0.5
merge_method: model_stock
base_model: yam-peleg/Experiment26-7B
dtype: bfloat16
```
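For intuition, Model Stock picks an interpolation ratio between the averaged fine-tuned weights and the base weights based on the angle between the fine-tuned checkpoints' task-vector deltas. Below is a minimal NumPy sketch of the two-model case on toy flat weight vectors; it illustrates the idea only and is not mergekit's actual implementation (the function name and toy tensors are illustrative assumptions).

```python
import numpy as np

def model_stock_merge(w0, w1, w2):
    """Toy two-model Model Stock merge over flat weight vectors.

    w0: base weights; w1, w2: fine-tuned weights.
    Interpolation ratio t = 2*cos(theta) / (1 + cos(theta)), where
    theta is the angle between the deltas (w1 - w0) and (w2 - w0).
    """
    d1, d2 = w1 - w0, w2 - w0
    cos = float(d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2)))
    t = 2 * cos / (1 + cos)
    w_avg = (w1 + w2) / 2
    # Pull the average back toward the base in proportion to (1 - t).
    return t * w_avg + (1 - t) * w0

base = np.zeros(4)
ft_a = np.array([1.0, 1.0, 0.0, 0.0])
ft_b = np.array([1.0, 0.0, 1.0, 0.0])
merged = model_stock_merge(base, ft_a, ft_b)
# Here cos(theta) = 0.5, so t = 2/3 and merged = [2/3, 1/3, 1/3, 0].
```

In practice the merge is produced by feeding the YAML configuration above to mergekit rather than by hand-rolled code like this.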