This repo contains an exl2 quant of Llama-3-Luminurse-v0.2-OAS-8B at 8bpw. For suggested sampler settings, refer to the model card of the original repo.
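As a hedged illustration (not part of the original card), an exl2 quant in this format can be loaded locally with the exllamav2 library; the model directory and prompt below are placeholders, and sampler values should follow the original model card as noted above.

```python
# Sketch of loading this 8bpw exl2 quant with exllamav2; paths, prompt, and
# generation settings are placeholders, not values from the model card.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/models/Llama-3-Luminurse-v0.2-OAS-8B-8bpw-exl2"  # local download of this repo

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the KV cache as layers load
model.load_autosplit(cache)                # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Describe the symptoms of anemia.", max_new_tokens=128))
```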
This model is a merge of pre-trained language models created using mergekit.
Luminurse is a merge based on Lumimaid, enhanced with a biomedical model, with a dash of TheSpice thrown in to improve formatting of text generation.
Built with Meta Llama 3.
This model was merged using the task arithmetic merge method, with NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS as the base.
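As a rough sketch of what that means (not mergekit's actual implementation), task arithmetic adds weighted "task vectors", each donor's weights minus the base's, onto the base model, tensor by tensor:

```python
# Minimal per-tensor sketch of the task_arithmetic idea; mergekit's real
# implementation also handles tokenizers, dtype casting, and layer slicing.
import torch

def task_arithmetic_merge(base: torch.Tensor,
                          donors: list[tuple[torch.Tensor, float]]) -> torch.Tensor:
    """Add each donor's task vector (donor - base), scaled by its weight, to the base."""
    merged = base.clone()
    for donor, weight in donors:
        merged += weight * (donor - base)
    return merged

# In the configuration below, OpenBioLLM contributes at weight 0.2 and
# TheSpice at weight 0.04 on top of the Lumimaid OAS base.
```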
The following models were included in the merge:
* grimjim/llama-3-aaditya-OpenBioLLM-8B
* cgato/L3-TheSpice-8b-v0.8.3
The following YAML configuration was used to produce this model:
base_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
slices:
  - sources:
      - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
        layer_range: [0,32]
      - model: grimjim/llama-3-aaditya-OpenBioLLM-8B
        layer_range: [0,32]
        parameters:
          weight: 0.2
      - model: cgato/L3-TheSpice-8b-v0.8.3
        layer_range: [0,32]
        parameters:
          weight: 0.04
merge_method: task_arithmetic
dtype: bfloat16
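For reference, a configuration like the one above would typically be run through mergekit itself; the sketch below assumes mergekit's documented Python entry point, with a placeholder config file ("luminurse.yml" holding the YAML above) and output directory.

```python
# Hypothetical reproduction of the merge via mergekit's Python API;
# file and output paths are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("luminurse.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)
```

The equivalent command-line route is the mergekit-yaml tool (e.g. `mergekit-yaml luminurse.yml ./merged`).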
Base model: grimjim/Llama-3-Luminurse-v0.2-OAS-8B