Quantizations made by Richard Erkhov.
NeuralTrix-7B-dpo - GGUF
- Model creator: https://huggingface.co/CultriX/
- Original model: https://huggingface.co/CultriX/NeuralTrix-7B-dpo/
| Name | Quant method | Size |
| ---- | ------------ | ---- |
| NeuralTrix-7B-dpo.Q2_K.gguf | Q2_K | 2.53GB |
| NeuralTrix-7B-dpo.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| NeuralTrix-7B-dpo.IQ3_S.gguf | IQ3_S | 2.96GB |
| NeuralTrix-7B-dpo.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| NeuralTrix-7B-dpo.IQ3_M.gguf | IQ3_M | 3.06GB |
| NeuralTrix-7B-dpo.Q3_K.gguf | Q3_K | 3.28GB |
| NeuralTrix-7B-dpo.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| NeuralTrix-7B-dpo.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| NeuralTrix-7B-dpo.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| NeuralTrix-7B-dpo.Q4_0.gguf | Q4_0 | 3.83GB |
| NeuralTrix-7B-dpo.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| NeuralTrix-7B-dpo.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| NeuralTrix-7B-dpo.Q4_K.gguf | Q4_K | 4.07GB |
| NeuralTrix-7B-dpo.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| NeuralTrix-7B-dpo.Q4_1.gguf | Q4_1 | 4.24GB |
| NeuralTrix-7B-dpo.Q5_0.gguf | Q5_0 | 4.65GB |
| NeuralTrix-7B-dpo.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| NeuralTrix-7B-dpo.Q5_K.gguf | Q5_K | 4.78GB |
| NeuralTrix-7B-dpo.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| NeuralTrix-7B-dpo.Q5_1.gguf | Q5_1 | 5.07GB |
| NeuralTrix-7B-dpo.Q6_K.gguf | Q6_K | 5.53GB |
| NeuralTrix-7B-dpo.Q8_0.gguf | Q8_0 | 7.17GB |
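To run one of these quantized files locally, here is a minimal sketch using huggingface_hub and llama-cpp-python. The `repo_id` below is an assumption inferred from this page's naming, so verify it before downloading:

```python
# Minimal sketch: fetch one quant and run it with llama-cpp-python.
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/CultriX_-_NeuralTrix-7B-dpo-gguf",  # assumed repo id, verify first
    filename="NeuralTrix-7B-dpo.Q4_K_M.gguf",  # a common quality/size trade-off
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("What is a large language model?", max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```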
Original model description:
```yaml
tags:
  - merge
  - mergekit
  - lazymergekit
  - mlabonne/OmniBeagle-7B
  - flemmingmiguel/MBX-7B-v3
  - AiMavenAi/AiMaven-Prometheus
base_model:
  - mlabonne/OmniBeagle-7B
  - flemmingmiguel/MBX-7B-v3
  - AiMavenAi/AiMaven-Prometheus
license: apache-2.0
```
Edit: Please see This Thread
NeuralTrix-7B-v1
NeuralTrix-7B-v1 is a merge of the following models using LazyMergekit:
- mlabonne/OmniBeagle-7B
- flemmingmiguel/MBX-7B-v3
- AiMavenAi/AiMaven-Prometheus
It was then trained with DPO using:
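As a rough illustration of this DPO step, here is a minimal, hedged sketch using trl's DPOTrainer. The dataset name and hyperparameters are placeholders rather than the recipe actually used, and exact argument names vary between trl versions:

```python
# Hedged sketch of DPO fine-tuning with trl; placeholders, not the actual recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "CultriX/NeuralTrix-7B-v1"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# DPO expects preference pairs: "prompt", "chosen" and "rejected" columns.
dataset = load_dataset("your/dpo-pairs-dataset", split="train")  # hypothetical dataset

config = DPOConfig(output_dir="neuraltrix-dpo", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
)
trainer.train()
```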
🧩 Configuration
```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: mlabonne/OmniBeagle-7B
    parameters:
      density: 0.65
      weight: 0.4
  - model: flemmingmiguel/MBX-7B-v3
    parameters:
      density: 0.6
      weight: 0.35
  - model: AiMavenAi/AiMaven-Prometheus
    parameters:
      density: 0.6
      weight: 0.35
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: float16
```
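To make the `density` and `weight` parameters concrete, here is a small conceptual sketch of a DARE-style merge for a single tensor. This is not mergekit's implementation, and it omits the TIES sign-election step; it only shows the drop-and-rescale idea that `density` controls and the weighted accumulation that `weight` controls:

```python
# Conceptual sketch of DARE merging for one tensor (not mergekit's code).
import torch

def dare_merge_tensor(base, finetuned, densities, weights):
    merged_delta = torch.zeros_like(base)
    for ft, density, w in zip(finetuned, densities, weights):
        delta = ft - base                          # task vector of one finetune
        mask = torch.rand_like(delta) < density    # keep ~`density` of the entries
        merged_delta += w * (delta * mask / density)  # drop, rescale, weight
    return base + merged_delta

base = torch.randn(8, 8)
experts = [base + 0.05 * torch.randn(8, 8) for _ in range(3)]
merged = dare_merge_tensor(
    base, experts,
    densities=[0.65, 0.6, 0.6],   # values from the config above
    weights=[0.4, 0.35, 0.35],
)
```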
💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "CultriX/NeuralTrix-7B-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's own chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```