
# llama3.1-8b-spaetzle-v74

llama3.1-8b-spaetzle-v74 is a della_linear merge of the following models:

- [cstr/llama3.1-8b-spaetzle-v59](https://huggingface.co/cstr/llama3.1-8b-spaetzle-v59)
- [cstr/llama3.1-8b-spaetzle-v63](https://huggingface.co/cstr/llama3.1-8b-spaetzle-v63)
- [cstr/llama3.1-8b-spaetzle-v66](https://huggingface.co/cstr/llama3.1-8b-spaetzle-v66)
- [cstr/llama3.1-8b-spaetzle-v73](https://huggingface.co/cstr/llama3.1-8b-spaetzle-v73)

EQ-Bench v2_de: 68.05 (169/171), en: 75.27. Not the best scores, but the model produces decent answers to some trick questions, and I have a soft spot for that ;)

## 🧩 Configuration

```yaml
models:
  - model: cstr/llama3.1-8b-spaetzle-v59
    parameters:
      weight: 0.3
      density: 0.5
  - model: cstr/llama3.1-8b-spaetzle-v63
    parameters:
      weight: 0.15
      density: 0.5
  - model: cstr/llama3.1-8b-spaetzle-v66
    parameters:
      weight: 0.15
      density: 0.5
  - model: cstr/llama3.1-8b-spaetzle-v73
    parameters:
      weight: 0.4
      density: 0.5
base_model: cstr/llama3.1-8b-spaetzle-v59
merge_method: della_linear
parameters:
  int8_mask: true
  normalize: true
  epsilon: 0.1
  lambda: 1.0
  density: 0.7
dtype: bfloat16
```
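
To reproduce the merge, a minimal sketch using [mergekit](https://github.com/arcee-ai/mergekit) is shown below; the filename `config.yaml` and the output path are illustrative assumptions, not part of this card:

```python
# Hypothetical notebook cell: re-run the della_linear merge with mergekit.
# Assumes the YAML configuration above has been saved as config.yaml.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./llama3.1-8b-spaetzle-v74 --cuda
```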

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "cstr/llama3.1-8b-spaetzle-v74"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages into a prompt string with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model into a text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
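
If you prefer calling the model directly instead of going through the pipeline helper, here is a minimal sketch with `AutoModelForCausalLM`; it loads the weights in bfloat16 to match the merge dtype, and the sampling parameters mirror the snippet above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "cstr/llama3.1-8b-spaetzle-v74"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

# Tokenize the chat directly instead of rendering a prompt string first
messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```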