# QuantFactory/SOVL-Mega-Mash-V2-L3-8B-GGUF

This is a quantized (GGUF) version of saishf/SOVL-Mega-Mash-V2-L3-8B, created with llama.cpp.

## Model Description

This model is a merge of pre-trained language models, created with mergekit.

## Merge Details

### Merge Method

This model was merged using the Model Stock merge method, with saishf/Neural-SOVLish-Devil-8B-L3 as the base.
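Roughly, Model Stock averages the fine-tuned checkpoints' weights and then interpolates that average toward the base model, with the interpolation ratio derived from the angle between the fine-tuned models' weight deltas. A minimal per-layer sketch in NumPy (the function name and the pairwise-cosine averaging are illustrative simplifications, not mergekit's actual implementation):

```python
import numpy as np

def model_stock_layer(base, finetuned, eps=1e-8):
    """Merge one layer's weights with a simplified Model Stock rule.

    base:      base-model weights for this layer (np.ndarray)
    finetuned: list of k fine-tuned weight arrays, same shape as base
    """
    k = len(finetuned)
    deltas = [w - base for w in finetuned]

    # Mean pairwise cosine similarity between the fine-tuned deltas
    # (the "angle" Model Stock uses to choose the interpolation ratio).
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cos_vals.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
    cos_theta = float(np.mean(cos_vals))

    # Interpolation ratio: t = k*cos(theta) / (1 + (k-1)*cos(theta)).
    # Deltas that agree (cos -> 1) pull t -> 1, keeping the average;
    # deltas that disagree pull the result back toward the base.
    t = k * cos_theta / (1 + (k - 1) * cos_theta)

    avg = np.mean(finetuned, axis=0)
    return t * avg + (1 - t) * base
```

When the fine-tuned deltas all point the same way, `t` approaches 1 and the merge is just their average; the more they conflict, the more weight the base model retains.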

### Models Merged

The following models were included in the merge:

- saishf/Merge-Mayhem-L3-V2
- saishf/Merge-Mayhem-L3-V2.1
- saishf/SOVLish-Maid-L3-8B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: saishf/Neural-SOVLish-Devil-8B-L3
  - model: saishf/Merge-Mayhem-L3-V2
  - model: saishf/Merge-Mayhem-L3-V2.1
  - model: saishf/SOVLish-Maid-L3-8B
merge_method: model_stock
base_model: saishf/Neural-SOVLish-Devil-8B-L3
dtype: bfloat16
```
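A merge like this can be reproduced with mergekit's config-driven CLI and then converted to GGUF with llama.cpp; a sketch of the workflow (file paths and the chosen quant type are illustrative assumptions, not the exact commands QuantFactory ran):

```shell
# Run the merge described by the YAML above (saved as merge-config.yaml).
mergekit-yaml merge-config.yaml ./SOVL-Mega-Mash-V2-L3-8B

# Convert the merged HF checkpoint to GGUF with llama.cpp, then quantize.
python llama.cpp/convert_hf_to_gguf.py ./SOVL-Mega-Mash-V2-L3-8B \
    --outfile SOVL-Mega-Mash-V2-L3-8B.gguf
llama.cpp/llama-quantize SOVL-Mega-Mash-V2-L3-8B.gguf \
    SOVL-Mega-Mash-V2-L3-8B.Q4_K_M.gguf Q4_K_M
```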
## GGUF Details

- Model size: 8.03B params
- Architecture: llama
- Quantizations available: 2-bit, 3-bit, 4-bit