
# rpbase

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged with the DARE TIES merge method, using TheBloke/Llama-2-13B-fp16 as the base model.
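Conceptually, DARE TIES builds a "task vector" (finetuned weights minus base weights) for each model, randomly sparsifies it, and resolves sign conflicts between models before adding the combined delta back to the base. A minimal pure-Python sketch of that idea (an illustration only, not mergekit's actual implementation; the function names here are hypothetical):

```python
import random


def sign(x):
    """Return -1, 0, or 1 for the sign of x."""
    return (x > 0) - (x < 0)


def dare(delta, density, rng):
    # DARE: randomly Drop a fraction (1 - density) of the delta's
    # entries And REscale survivors by 1/density, so the expected
    # value of the delta is preserved.
    return [d / density if rng.random() < density else 0.0 for d in delta]


def dare_ties_merge(base, finetuned, weights, density, seed=0):
    # Hypothetical sketch of DARE-TIES: sparsify each task vector
    # (finetuned - base) with DARE, elect a per-parameter sign from
    # the weighted sum of deltas (TIES sign election), then keep only
    # the deltas that agree with the elected sign.
    rng = random.Random(seed)
    deltas = []
    for w, ft in zip(weights, finetuned):
        sparse = dare([f - b for f, b in zip(ft, base)], density, rng)
        deltas.append([w * d for d in sparse])
    elected = [sign(sum(col)) for col in zip(*deltas)]
    merged = [
        sum(d for d in col if sign(d) == e)
        for col, e in zip(zip(*deltas), elected)
    ]
    return [b + m for b, m in zip(base, merged)]
```

With `density: 0.5`, as in the configuration below, roughly half of each task vector's entries are dropped and the survivors are doubled, which keeps the expected delta unchanged while reducing interference between the merged models.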

### Models Merged

The following models were included in the merge:

- Norquinal/OpenCAI-13B
- PygmalionAI/mythalion-13b
- microsoft/Orca-2-13b
- ChaiML/phase2_winner_13b2
- jondurbin/airoboros-l2-13b-2.2.1
- NousResearch/Nous-Hermes-Llama2-13b

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
- model: Norquinal/OpenCAI-13B
  parameters:
    density: 0.5
    weight: 0.2
- model: PygmalionAI/mythalion-13b
  parameters:
    density: 0.5
    weight: 0.3
- model: microsoft/Orca-2-13b
  parameters:
    density: 0.5
    weight: 0.1
- model: ChaiML/phase2_winner_13b2
  parameters:
    density: 0.5
    weight: 0.4
- model: jondurbin/airoboros-l2-13b-2.2.1
  parameters:
    density: 0.5
    weight: 0.4
- model: NousResearch/Nous-Hermes-Llama2-13b
  parameters:
    density: 0.5
    weight: 0.4
base_model: TheBloke/Llama-2-13B-fp16
merge_method: dare_ties
parameters:
  normalize: 1.0
```
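Note that the per-model `weight` values in this configuration sum to 1.8, not 1.0. With `normalize: 1.0` set, mergekit rescales the weights so they sum to one before combining the deltas. A small sketch of that rescaling, assuming `normalize` simply divides each weight by their total:

```python
# Per-model weights copied from the configuration above.
weights = {
    "Norquinal/OpenCAI-13B": 0.2,
    "PygmalionAI/mythalion-13b": 0.3,
    "microsoft/Orca-2-13b": 0.1,
    "ChaiML/phase2_winner_13b2": 0.4,
    "jondurbin/airoboros-l2-13b-2.2.1": 0.4,
    "NousResearch/Nous-Hermes-Llama2-13b": 0.4,
}

# With normalize enabled, each weight is divided by the total (1.8),
# so the effective weights sum to 1.
total = sum(weights.values())
normalized = {name: w / total for name, w in weights.items()}
```

Under this normalization, ChaiML/phase2_winner_13b2, airoboros, and Nous-Hermes each contribute the largest effective share (about 0.22), while Orca-2 contributes the smallest (about 0.056).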