# L3-SthenoMaidBlackroot-15B

L3-SthenoMaidBlackroot-15B is a passthrough self-merge of the following model, made using LazyMergekit:

* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)

## 🧩 Configuration

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- sources:
  - layer_range: [8, 24]
    model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [8, 24]
    model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [24, 32]
    model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
```
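
The passthrough method simply stacks the listed layer ranges, and the `scale` entries zero out `o_proj` and `down_proj` in the duplicated 8–24 blocks, so the extra copies initially contribute little beyond the residual stream. A minimal sketch of the layer arithmetic (layer ranges copied from the config above; the 32-layer base depth assumes the standard Llama-3-8B architecture):

```python
# Layer ranges taken from the merge config above (end index is exclusive).
slices = [(0, 24), (8, 24), (8, 24), (24, 32)]

base_layers = 32  # assumption: standard Llama-3-8B decoder depth
merged_layers = sum(end - start for start, end in slices)

print(f"base model layers:   {base_layers}")    # 32
print(f"merged model layers: {merged_layers}")  # 24 + 16 + 16 + 8 = 64
```

Doubling the decoder depth in this way, while keeping a single set of embeddings, is what brings the parameter count to roughly 15B.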

πŸ’» Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Tremontaine/L3-SthenoMaidBlackroot-15B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the merged model for text generation
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
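
At roughly 15B parameters, the float16 weights need on the order of 30 GB of memory, so the pipeline above may not fit on a single consumer GPU. A hedged sketch of loading the model in 4-bit instead, assuming `bitsandbytes` is installed (not part of the original card):

```python
# pip install -qU transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "Tremontaine/L3-SthenoMaidBlackroot-15B"

# 4-bit NF4 quantization to shrink the memory footprint (assumes bitsandbytes is available)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

messages = [{"role": "user", "content": "What is a large language model?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

4-bit NF4 loading trades some output quality for a roughly 4x smaller memory footprint than float16.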