
# M7-8B-passthrough

M7-8B-passthrough is a passthrough (layer-stacking) merge of the following model using LazyMergekit:

* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)

## 🧩 Configuration

```yaml
dtype: float16
merge_method: passthrough
slices:
  - sources:
    - model: liminerity/M7-7b
      layer_range: [0, 9]
  - sources:
    - model: liminerity/M7-7b
      layer_range: [5, 14]
  - sources:
    - model: liminerity/M7-7b
      layer_range: [10, 19]
  - sources:
    - model: liminerity/M7-7b
      layer_range: [15, 24]
  - sources:
    - model: liminerity/M7-7b
      layer_range: [20, 32]
```
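The overlapping slices copy blocks of liminerity/M7-7b more than once, so the stacked model is deeper than its 32-layer base. A minimal sketch of the resulting depth, assuming mergekit's `layer_range` is a half-open `[start, end)` interval and the base is a 32-layer Mistral-style model:

```python
# Minimal sketch: count the decoder layers produced by the slices above.
# Assumptions: layer_range is half-open [start, end) and the base model
# (liminerity/M7-7b, Mistral-7B-style) has 32 decoder layers.
slices = [(0, 9), (5, 14), (10, 19), (15, 24), (20, 32)]

stacked_layers = sum(end - start for start, end in slices)
print(f"base layers: 32, stacked layers: {stacked_layers}")  # 48 layers after stacking
```

Because overlapping ranges are duplicated rather than shared, the merge ends up at roughly 10.7B parameters even though only one 7B source model is used.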

πŸ’» Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "allknowingroger/M7-8B-passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt from the messages using the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the merged model in half precision and spread it across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
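If you prefer to call the model directly rather than through the `pipeline` helper, a sketch using `AutoModelForCausalLM` with the same model ID and sampling settings as above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "allknowingroger/M7-8B-passthrough"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the chat template and tokenize in one step, returning PyTorch tensors.
messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```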