This is an experimental merge that replicates additional layers within the model without post-merge healing. There is some damage to the model, but it appears to be tolerable as is. The resulting impact on narrative text completion may be of interest.
Light testing was performed with instruct prompting and the following sampler settings:
Full weights: grimjim/llama-3-experiment-v1-9B
GGUF quants: grimjim/llama-3-experiment-v1-9B-GGUF
This is a merge of the pre-trained language model meta-llama/Meta-Llama-3-8B-Instruct, created using mergekit.
Built with Meta Llama 3.
This model was merged using the passthrough merge method.
The following model was included in the merge: meta-llama/Meta-Llama-3-8B-Instruct.
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 12]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
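As a rough sketch of what the configuration above does: a passthrough merge simply stacks the listed layer slices in order without averaging any weights, so the overlapping layer indices are copied twice. The snippet below (plain Python; the 32-layer depth of Llama-3-8B and the half-open interpretation of `layer_range` are assumptions, not taken from this card) illustrates the resulting layer layout:

```python
# Sketch of how the passthrough merge stacks the two slices from the YAML.
# Assumption: Meta-Llama-3-8B-Instruct has 32 transformer layers and
# layer_range values are half-open, i.e. [start, end).

slices = [
    ("meta-llama/Meta-Llama-3-8B-Instruct", (0, 12)),
    ("meta-llama/Meta-Llama-3-8B-Instruct", (8, 32)),
]

# Concatenate the source layer indices in order; layers are copied as-is,
# so indices covered by both slices appear twice in the merged model.
merged_layers = [i for _, (start, end) in slices for i in range(start, end)]

duplicated = sorted({i for i in merged_layers if merged_layers.count(i) > 1})

print(len(merged_layers))  # 36 layers in the merged model, up from 32
print(duplicated)          # layers 8-11 are present twice
```

The duplicated layers 8–11 are what grow the model from 8B toward 9B parameters, and copying them without any "healing" (post-merge fine-tuning) is the source of the tolerable damage described above.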