# Mahou-1.5-mistral-nemo-12B-lorablated

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using flammenai/Mahou-1.5-mistral-nemo-12B + nbeerbower/Mistral-Nemo-12B-abliterated-LORA as a base. (In mergekit, the `model+lora` syntax applies the LoRA adapter to the named model before merging.)
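
As a rough illustration of the idea behind task arithmetic, the sketch below adds a weighted task vector (the difference between fine-tuned and base weights) back onto the base weights. This is a minimal sketch in plain PyTorch; the function name and tensor shapes are illustrative only and are not mergekit's API.

```python
import torch

def task_arithmetic(base: torch.Tensor, tuned: torch.Tensor, weight: float = 1.0) -> torch.Tensor:
    """Add the weighted task vector (tuned - base) back onto the base weights."""
    task_vector = tuned - base
    return base + weight * task_vector

# Stand-ins for one weight matrix of the base model and its fine-tuned counterpart.
base_w = torch.randn(4, 4)
tuned_w = base_w + 0.1 * torch.randn(4, 4)

merged_w = task_arithmetic(base_w, tuned_w, weight=1.0)
```

With a single source model and `weight: 1.0`, as in the configuration below, the result is simply the fine-tuned (LoRA-applied) weights themselves.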

### Models Merged

No models beyond the base were included in this merge; the configuration below simply applies the abliterated LoRA adapter to the base model at full weight across all 40 layers.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: flammenai/Mahou-1.5-mistral-nemo-12B+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 40]
    model: flammenai/Mahou-1.5-mistral-nemo-12B+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
    parameters:
      weight: 1.0
```
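
Once merged (for example via mergekit's `mergekit-yaml` entry point), the result loads like any other transformers checkpoint. Below is a minimal usage sketch, assuming the merged weights are published under the repo id implied by this card's title; adjust it if the weights live elsewhere.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Hello!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```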