
L3-SthenoMaidBlackroot-8B-V1 - EXL2 8.05bpw rpcal mk2

This is an 8.05bpw EXL2 quant of bluuwhale/L3-SthenoMaidBlackroot-8B-V1.

This quant was made with exllamav2 0.0.21, using the Bluemoon-light dataset as RP-oriented calibration data.
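For reference, here is a rough sketch of how an EXL2 quant like this can be produced with exllamav2's convert script; the paths, the calibration parquet file name, and the exact flag values below are placeholders rather than settings recorded in this card:

```python
import subprocess

# Hypothetical local paths; point them at the real fp16 model, a scratch
# directory, and the folder where the finished quant should be written.
cmd = [
    "python", "convert.py",                       # exllamav2's conversion script
    "-i", "models/L3-SthenoMaidBlackroot-8B-V1",  # source fp16 model
    "-o", "work",                                 # working dir for measurement/quant passes
    "-cf", "out/L3-SthenoMaidBlackroot-8B-V1-exl2-8.05bpw",  # finished quant
    "-b", "8.05",                                 # target bits per weight
    "-hb", "8",                                   # bits for the output head
    "-c", "data/bluemoon-light.parquet",          # RP calibration data (placeholder file name)
]
subprocess.run(cmd, check=True)
```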

I briefly tested this quant in a few random RPs (including one past 8k context, with RoPE scaling as recommended by the webui, possibly with alpha_value set slightly higher) and it seems to work fine.
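Below is a minimal sketch of loading an EXL2 quant like this with the exllamav2 Python API, including the NTK RoPE alpha knob mentioned above; the local model path, context length, alpha value, and sampling settings are illustrative assumptions, not values from this card:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "models/L3-SthenoMaidBlackroot-8B-V1_exl2_8.05bpw_rpcal_mk2"  # local download
config.prepare()
config.max_seq_len = 16384      # pushing past Llama 3's native 8k context
config.scale_alpha_value = 2.5  # NTK RoPE alpha; assumed value for roughly 2x context

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.9

print(generator.generate_simple("The tavern door creaks open and", settings, 200))
```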

Prompt Templates

It seems to use the Llama 3 prompt template.
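For convenience, this is the standard Llama 3 Instruct chat layout; the card only says the model seems to use it, and the system/user text here is placeholder:

```python
# Standard Llama 3 Instruct format with its special tokens; treat this as the
# usual Llama 3 layout rather than something verified for this specific merge.
LLAMA3_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "{system_prompt}<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "{user_message}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

prompt = LLAMA3_TEMPLATE.format(
    system_prompt="You are a roleplay assistant.",
    user_message="Describe the tavern we just walked into.",
)
```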

Original readme below


model-out

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with Sao10K/L3-8B-Stheno-v3.2 as the base.
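As a rough illustration of the Model Stock idea (a sketch of the method from the Model Stock paper, not mergekit's actual implementation): for each tensor, the fine-tuned models are averaged and then interpolated back toward the base with a ratio t = k·cosθ / (1 + (k-1)·cosθ), where cosθ is estimated from the angles between the fine-tuned deltas:

```python
import itertools
import torch

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Per-tensor sketch of Model Stock; mergekit's real code differs in details.

    Averages the fine-tuned tensors, then interpolates toward the base using
    t = k*cos(theta) / (1 + (k-1)*cos(theta)), with cos(theta) taken as the
    mean pairwise cosine between the fine-tuned deltas (w_i - base).
    """
    k = len(finetuned)
    deltas = [(w - base).flatten().float() for w in finetuned]

    # Mean pairwise cosine similarity between the deltas.
    cosines = [
        torch.nn.functional.cosine_similarity(a, b, dim=0)
        for a, b in itertools.combinations(deltas, 2)
    ]
    cos_theta = torch.stack(cosines).mean()

    t = (k * cos_theta) / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack([w.float() for w in finetuned]).mean(dim=0)
    return (t * w_avg + (1 - t) * base.float()).to(base.dtype)
```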

Models Merged

The following models were included in the merge:

- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot

Configuration

The following YAML configuration was used to produce this model:


models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
merge_method: model_stock
base_model: Sao10K/L3-8B-Stheno-v3.2
dtype: float16
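To reproduce a merge from a config like this, mergekit's mergekit-yaml entry point takes the YAML file and an output directory; a minimal sketch, assuming the config above was saved as merge-config.yml and a CUDA GPU is available:

```python
import subprocess

# Run mergekit on the saved YAML config and write the merged model to a folder.
subprocess.run(
    ["mergekit-yaml", "merge-config.yml", "./L3-SthenoMaidBlackroot-8B-V1", "--cuda"],
    check=True,
)
```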