---
tags:
  - merge
  - mergekit
  - Etheria
base_model:
  - brucethemoose/Yi-34B-200K-DARE-megamerge-v8
license: apache-2.0
---

# VerA-Etheria-55b


An attempt at a functional Goliath-style merge: a single Yi-34B-200K model is stacked onto itself to produce an [Etheria] 55B-200K model. This is Version A (VerA), a single-model passthrough merge.

Roadmap:

Depending on quality, I might make the other version private, then generate a sacrificial 55B and perform a 55B DARE-TIES or SLERP merge.

1. If the dual-model merge performs well, I will make a direct inverse of the config and then merge.
2. If the single-model merge performs well, I will generate a 55B of the most performant model and then do a SLERP or DARE-TIES merge (see the sketch after this list).
3. If both models perform well, I will complete both 1 and 2, then change the naming scheme to match each of the new models.
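
For illustration only, a minimal sketch of what the step-2 SLERP merge between two 55B variants might look like in mergekit. The second model name and the `t` value are assumptions, not a final config:

```yaml
# Hypothetical SLERP merge of two 55B variants (second model name is a placeholder).
merge_method: slerp
base_model: steelskull/VA-Etheria-55b
slices:
  - sources:
      - model: steelskull/VA-Etheria-55b
        layer_range: [0, 98]
      - model: steelskull/VB-Etheria-55b   # assumed second 55B variant
        layer_range: [0, 98]
parameters:
  t: 0.5   # equal interpolation weight; would be tuned per evaluation
dtype: bfloat16
```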

## 🧩 Configuration

```yaml
dtype: bfloat16
slices:
- sources:
    - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
      layer_range: [0, 14]
- sources:
    - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
      layer_range: [7, 21]
- sources:
    - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
      layer_range: [15, 29]
- sources:
    - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
      layer_range: [22, 36]
- sources:
    - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
      layer_range: [30, 44]
- sources:
    - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
      layer_range: [37, 51]
- sources:
    - model: brucethemoose/Yi-34B-200K-DARE-megamerge-v8
      layer_range: [45, 59]
merge_method: passthrough
```
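
Each of the seven slices spans 14 layers, with consecutive slices overlapping by 6-7 layers, so the passthrough stacks 7 × 14 = 98 layers against the base model's 60. Scaling Yi's 34B parameters by roughly 98/60 gives about 55B, hence the model name.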

## 💻 Usage

```bash
pip install -qU transformers accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "steelskull/VA-Etheria-55b"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt using the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# device_map="auto" lets accelerate place layers across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
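
Note that a 55B-parameter model in float16 needs roughly 110 GB for the weights alone, so `device_map="auto"` will shard the layers across however many GPUs (and, if needed, CPU memory) are available.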