QuantFactory/Stella-mistral-nemo-12B-v2-GGUF
This is a quantized version of nbeerbower/Stella-mistral-nemo-12B-v2, created using llama.cpp.
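To run one of the GGUF files from this repository locally, the llama-cpp-python bindings (or the llama.cpp CLI) can load it directly. The sketch below is a minimal example under stated assumptions: the local filename and the Q4_K_M quantization level are placeholders, so substitute whichever quant file you actually download.

```python
# Minimal inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The model filename below is an assumption; point it at the GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./Stella-mistral-nemo-12B-v2.Q4_K_M.gguf",  # assumed local path
    n_ctx=4096,       # context window; adjust to your memory budget
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the Model Stock merge method in two sentences."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```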
Original Model Card
Stella-mistral-nemo-12B-v2
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the Model Stock merge method, with nbeerbower/mistral-nemo-bophades-12B as the base.
Models Merged
The following models were included in the merge:
- nbeerbower/mistral-nemo-gutenberg-12B-v3
- nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
- nbeerbower/mistral-nemo-gutenberg-12B-v4
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: nbeerbower/Lyra-Gutenberg-mistral-nemo-12B
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v3
merge_method: model_stock
base_model: nbeerbower/mistral-nemo-bophades-12B
dtype: bfloat16
```
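For reference, a merge like this can be reproduced with mergekit, either through the `mergekit-yaml` CLI or its Python API. The sketch below is illustrative, not the authors' exact procedure: it assumes the configuration above has been saved to `config.yaml`, and the output path and options are placeholders.

```python
# Sketch of reproducing the merge with mergekit (pip install mergekit).
# Assumes the YAML configuration above is saved as config.yaml.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Stella-mistral-nemo-12B-v2",  # illustrative output directory
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # copy a tokenizer into the output directory
    ),
)
```

The equivalent CLI invocation would be `mergekit-yaml config.yaml ./Stella-mistral-nemo-12B-v2`.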
Quantized variants are provided at 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit precision.