---
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
library_name: transformers
tags:
- mergekit
- merge
---

# L3-SthenoMaidBlackroot-8B-V1 - EXL2 8.05bpw rpcal mk2

This is an 8.05bpw EXL2 quant of [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1).

This quant was made with exllamav2 0.0.21, using the [Bluemoon-Light dataset](https://huggingface.co/datasets/ParasiticRogue/Bluemoon-Light) as calibration data for RP.

I briefly tested this quant in a few random RPs (including one past 8k context, with RoPE scaling as recommended by the webui, possibly with a slightly higher alpha_value) and it seems to work fine.

## Prompt Templates

The model appears to use the standard Llama 3 prompt template.

### Original readme below

---

# model-out

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) as the base.

### Models Merged

The following models were included in the merge:
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
merge_method: model_stock
base_model: Sao10K/L3-8B-Stheno-v3.2
dtype: float16
```
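For reference, a config like this can be applied with mergekit's Python entry point. This is a minimal sketch, not the exact command used for this merge; the config path `config.yaml` and output directory `./model-out` are placeholders, and the option values are illustrative:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML config shown above (saved locally as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the model_stock merge and write the merged model to ./model-out.
run_merge(
    merge_config,
    out_path="./model-out",
    options=MergeOptions(
        cuda=True,           # set False to merge on CPU
        copy_tokenizer=True,
        lazy_unpickle=True,  # lower peak RAM while loading checkpoints
    ),
)
```

The same config also works with mergekit's CLI, e.g. `mergekit-yaml config.yaml ./model-out --cuda`.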
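Returning to the quant notes at the top: loading this EXL2 quant past Llama 3's native 8k context with a raised NTK RoPE alpha would look roughly like this with exllamav2's Python API. This is a sketch only; the model path, context length, alpha value, and sampler settings are illustrative assumptions, not tested settings:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/L3-SthenoMaidBlackroot-8B-V1-exl2-8.05bpw"  # local path to this quant
config.prepare()

# Llama 3 is trained for 8k context; to go past that, raise the NTK RoPE
# alpha along with max_seq_len (values here are illustrative).
config.max_seq_len = 12288
config.scale_alpha_value = 2.0

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("The tavern door creaked open and", settings, 200))
```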