
QuantFactory/Stella-mistral-nemo-12B-v2-GGUF

This is a quantized version of nbeerbower/Stella-mistral-nemo-12B-v2, created using llama.cpp.

Original Model Card

Stella-mistral-nemo-12B-v2

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with nbeerbower/mistral-nemo-bophades-12B as the base.

Models Merged

The following models were included in the merge:

nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
nbeerbower/mistral-nemo-gutenberg-12B-v4
nbeerbower/mistral-nemo-gutenberg-12B-v3

Configuration

The following YAML configuration was used to produce this model:

models:
    - model: nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
    - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    - model: nbeerbower/mistral-nemo-gutenberg-12B-v3
merge_method: model_stock
base_model: nbeerbower/mistral-nemo-bophades-12B
dtype: bfloat16
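To give a feel for what Model Stock does with the configuration above, here is a minimal, per-tensor sketch in NumPy. This is not mergekit's actual implementation: the geometric interpolation ratio follows the Model Stock paper's formula, but the layer handling and cosine-angle estimation here are simplified assumptions.

```python
import numpy as np

def model_stock_merge(base, finetuned):
    """Simplified Model Stock merge for a single weight tensor.

    Interpolates between the base weights and the average of the
    fine-tuned weights, with the ratio t derived from the average
    pairwise angle between the fine-tuned models' weight deltas.
    """
    k = len(finetuned)
    w_avg = np.mean(finetuned, axis=0)

    # Deltas of each fine-tuned model relative to the base.
    deltas = [w - base for w in finetuned]

    # Average pairwise cosine similarity between deltas.
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
            cos_vals.append(a @ b / denom)
    cos_theta = float(np.mean(cos_vals)) if cos_vals else 1.0

    # Interpolation ratio from the Model Stock paper:
    #   t = k*cos(theta) / (1 + (k-1)*cos(theta))
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    return t * w_avg + (1 - t) * base

# Toy usage with three "fine-tuned" tensors and a base tensor:
base = np.zeros(4)
finetuned = [np.array([1.0, 0, 0, 0]),
             np.array([0, 1.0, 0, 0]),
             np.array([0, 0, 1.0, 0])]
merged = model_stock_merge(base, finetuned)
```

The closer the fine-tuned models agree (small angles between their deltas), the closer t gets to 1 and the merge approaches a plain average; when they disagree, the result stays nearer the base weights.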

Format: GGUF
Model size: 12.2B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
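As a rough rule of thumb, the file size at each quantization level is about params × bits ÷ 8 bytes. The sketch below applies that to the 12.2B parameters listed above; it deliberately ignores quantization block overhead, metadata, and the fact that GGUF often keeps embedding and output layers at higher precision, so real files will be somewhat larger.

```python
PARAMS = 12.2e9  # 12.2B parameters, per the model card

def approx_gguf_size_gb(bits_per_weight: float) -> float:
    """Very rough GGUF size estimate: params * bits / 8 bytes,
    ignoring block scales, metadata, and mixed-precision layers."""
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_gguf_size_gb(bits):.1f} GB")
```

For example, the 4-bit variant comes out to roughly 6 GB, which is why 4-bit and 5-bit quants are popular for consumer GPUs.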

