
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp - GGUF

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q2_K.gguf | Q2_K | 3.73GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.IQ3_XS.gguf | IQ3_XS | 4.14GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.IQ3_S.gguf | IQ3_S | 4.37GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q3_K_S.gguf | Q3_K_S | 4.34GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.IQ3_M.gguf | IQ3_M | 4.51GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q3_K.gguf | Q3_K | 4.84GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q3_K_M.gguf | Q3_K_M | 4.84GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q3_K_L.gguf | Q3_K_L | 5.26GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.IQ4_XS.gguf | IQ4_XS | 5.43GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q4_0.gguf | Q4_0 | 5.66GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.IQ4_NL.gguf | IQ4_NL | 5.72GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q4_K_S.gguf | Q4_K_S | 5.7GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q4_K.gguf | Q4_K | 6.02GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q4_K_M.gguf | Q4_K_M | 6.02GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q4_1.gguf | Q4_1 | 6.27GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q5_0.gguf | Q5_0 | 6.89GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q5_K_S.gguf | Q5_K_S | 6.89GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q5_K.gguf | Q5_K | 7.08GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q5_K_M.gguf | Q5_K_M | 7.08GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q5_1.gguf | Q5_1 | 7.51GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q6_K.gguf | Q6_K | 8.2GB |
| Sakura-SOLAR-Instruct-CarbonVillain-en-10.7B-v2-slerp.Q8_0.gguf | Q8_0 | 10.62GB |
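As a rough rule of thumb, the chosen GGUF file must fit in your available RAM or VRAM, with some headroom left for the KV cache and runtime overhead. A minimal sketch of a helper that picks the largest (and therefore generally highest-quality) quant from the table above under a given memory budget; `pick_quant` and the 1.5 GB headroom default are hypothetical illustrations, not part of this repo:

```python
# Hypothetical helper: choose the largest quant from the table above that
# fits a memory budget. Sizes are file sizes in GB from the table; headroom
# approximates KV cache and runtime overhead.
QUANT_SIZES_GB = {
    "Q2_K": 3.73, "IQ3_XS": 4.14, "Q3_K_S": 4.34, "IQ3_S": 4.37,
    "IQ3_M": 4.51, "Q3_K_M": 4.84, "Q3_K_L": 5.26, "IQ4_XS": 5.43,
    "Q4_0": 5.66, "Q4_K_S": 5.7, "IQ4_NL": 5.72, "Q4_K_M": 6.02,
    "Q4_1": 6.27, "Q5_0": 6.89, "Q5_K_S": 6.89, "Q5_K_M": 7.08,
    "Q5_1": 7.51, "Q6_K": 8.2, "Q8_0": 10.62,
}

def pick_quant(budget_gb: float, headroom_gb: float = 1.5):
    """Return the largest quant whose file plus headroom fits the budget, or None."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s + headroom_gb <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(8.0))   # on an 8 GB GPU -> Q4_1
print(pick_quant(13.0))  # with 13 GB free -> Q8_0
```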

Original model description:

```yaml
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- jeonsworld/CarbonVillain-en-10.7B-v2
- kyujinpy/Sakura-SOLAR-Instruct
```

NeuralPipe-7B-slerp

NeuralPipe-7B-slerp is a merge of the following models using LazyMergekit:

- jeonsworld/CarbonVillain-en-10.7B-v2
- kyujinpy/Sakura-SOLAR-Instruct

🧩 Configuration

```yaml
slices:
  - sources:
      - model: jeonsworld/CarbonVillain-en-10.7B-v2
        layer_range: [0, 48]
      - model: kyujinpy/Sakura-SOLAR-Instruct
        layer_range: [0, 48]
merge_method: slerp
base_model: jeonsworld/CarbonVillain-en-10.7B-v2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: float16
```
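Slerp (spherical linear interpolation) blends the two models' tensors along the arc between them rather than along a straight line, which preserves the magnitude of the weights better than plain averaging; at `t=0` the result equals the first model's tensor and at `t=1` the second's. A minimal, dependency-free sketch of the standard slerp formula on flattened weight vectors (an illustration of the math, not mergekit's actual implementation):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight vectors."""
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))          # guard against rounding error
    theta = math.acos(dot)
    if theta < eps:                          # nearly parallel: linear fallback
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # -> first vector, [1.0, 0.0]
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # midpoint on the unit circle
```

The `t` lists in the config above give slerp weights that vary across the layer stack, so early layers lean toward one parent model and later layers toward the other, with separate schedules for attention and MLP tensors.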

πŸ’» Usage

```shell
pip install -qU transformers accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "invalid-coder/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
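After downloading one of the GGUF files above, it can be worth verifying that you got a real model file and not, say, an HTML error page: every GGUF file starts with the 4-byte magic `GGUF` followed by a little-endian uint32 format version. A small sketch of such a sanity check; `check_gguf_header` is a hypothetical helper, not part of any library:

```python
import struct

GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file

def check_gguf_header(path):
    """Return the GGUF format version if the header looks valid, else raise."""
    with open(path, "rb") as f:
        header = f.read(8)
    if header[:4] != GGUF_MAGIC:
        raise ValueError(f"{path} is not a GGUF file (magic={header[:4]!r})")
    (version,) = struct.unpack("<I", header[4:8])  # little-endian uint32
    return version
```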
Downloads last month: 23,788 · Format: GGUF · Model size: 10.7B params · Architecture: llama
