Quantization made by Richard Erkhov.
NeuralPipe-7B-slerp - GGUF
- Model creator: https://huggingface.co/mlabonne/
- Original model: https://huggingface.co/mlabonne/NeuralPipe-7B-slerp/
| Name | Quant method | Size |
|---|---|---|
| NeuralPipe-7B-slerp.Q2_K.gguf | Q2_K | 2.53GB |
| NeuralPipe-7B-slerp.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| NeuralPipe-7B-slerp.IQ3_S.gguf | IQ3_S | 2.96GB |
| NeuralPipe-7B-slerp.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| NeuralPipe-7B-slerp.IQ3_M.gguf | IQ3_M | 3.06GB |
| NeuralPipe-7B-slerp.Q3_K.gguf | Q3_K | 3.28GB |
| NeuralPipe-7B-slerp.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| NeuralPipe-7B-slerp.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| NeuralPipe-7B-slerp.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| NeuralPipe-7B-slerp.Q4_0.gguf | Q4_0 | 3.83GB |
| NeuralPipe-7B-slerp.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| NeuralPipe-7B-slerp.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| NeuralPipe-7B-slerp.Q4_K.gguf | Q4_K | 4.07GB |
| NeuralPipe-7B-slerp.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| NeuralPipe-7B-slerp.Q4_1.gguf | Q4_1 | 4.24GB |
| NeuralPipe-7B-slerp.Q5_0.gguf | Q5_0 | 4.65GB |
| NeuralPipe-7B-slerp.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| NeuralPipe-7B-slerp.Q5_K.gguf | Q5_K | 4.78GB |
| NeuralPipe-7B-slerp.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| NeuralPipe-7B-slerp.Q5_1.gguf | Q5_1 | 5.07GB |
| NeuralPipe-7B-slerp.Q6_K.gguf | Q6_K | 5.53GB |
| NeuralPipe-7B-slerp.Q8_0.gguf | Q8_0 | 7.17GB |
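As a rough sanity check on the table above, the effective bits per weight of each quant can be estimated from its file size and the model's parameter count. The helper below is a sketch, not part of the original card: it assumes the "GB" figures are actually GiB, that a Mistral-7B-class model has roughly 7.24B parameters, and it ignores GGUF metadata overhead.

```python
def bits_per_weight(size_gib: float, n_params: float = 7.24e9) -> float:
    """Estimate effective bits per weight of a GGUF file.

    Assumes the table's "GB" sizes are GiB and the file is
    dominated by quantized weights (metadata overhead ignored).
    """
    return size_gib * 2**30 * 8 / n_params

# Q4_K_M at 4.07 GiB works out to roughly 4.8 bits per weight,
# consistent with llama.cpp's published figures for that quant type.
print(round(bits_per_weight(4.07), 2))  # 4.83
```

The same arithmetic puts Q8_0 (7.17GB) at about 8.5 bits per weight and Q2_K (2.53GB) at about 3.0, which matches the usual quality-vs-size ordering of these quant types.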
Original model description:

```yaml
license: apache-2.0
tags:
- merge
- mergekit
model-index:
- name: NeuralPipe-7B-slerp
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 67.75
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.15
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.94
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 59.8
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 79.64
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.75
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralPipe-7B-slerp
      name: Open LLM Leaderboard
```
NeuralPipe-7B-slerp

This model is a merge of the following models made with mergekit:
- OpenPipe/mistral-ft-optimized-1218 (https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
- mlabonne/NeuralHermes-2.5-Mistral-7B (https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
⚡ Quantized models
Thanks to TheBloke for the quantized models:
- GGUF: https://huggingface.co/TheBloke/NeuralPipe-7B-slerp-GGUF
- AWQ: https://huggingface.co/TheBloke/NeuralPipe-7B-slerp-AWQ
- GPTQ: https://huggingface.co/TheBloke/NeuralPipe-7B-slerp-GPTQ
🧩 Configuration
```yaml
slices:
  - sources:
      - model: OpenPipe/mistral-ft-optimized-1218
        layer_range: [0, 32]
      - model: mlabonne/NeuralHermes-2.5-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
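For intuition, spherical linear interpolation (slerp) blends two weight tensors along the arc between them rather than along a straight line, which tends to preserve the norm of the result. The anchor values in each `t` list are spread across the 32 layer positions, so self-attention weights lean toward one parent model at the ends of the list and MLP weights the opposite way. The NumPy sketch below illustrates the standard slerp formula and the interpolation of `t` over layers; it is a simplified illustration, not mergekit's actual implementation.

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between tensors
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# The five anchor values of `t` are stretched across the 32 layers,
# e.g. for the self_attn filter in the config above:
layer_t = np.interp(np.arange(32), np.linspace(0, 31, 5), [0, 0.5, 0.3, 0.7, 1])
```

With this schedule, `layer_t[0] == 0` (first layer comes entirely from the base model's attention weights) and `layer_t[31] == 1` (last layer entirely from the other parent), with a smooth ramp in between.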
💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Output:
A large language model is an AI system that uses deep learning techniques to process and understand vast amounts of natural language data. It is designed to generate human-like text, perform complex language tasks, and understand the context, nuance, and meaning of textual data. These models are trained on large datasets, often including billions of words, to learn the patterns and relationships in language. As a result, they can generate coherent and contextually relevant text, answer questions, and perform a variety of other language-related tasks. Some well-known large language models include OpenAI's GPT-3, Google's BERT, and Facebook's RoBERTa.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 71.17 |
| AI2 Reasoning Challenge (25-Shot) | 67.75 |
| HellaSwag (10-Shot) | 86.15 |
| MMLU (5-Shot) | 63.94 |
| TruthfulQA (0-shot) | 59.80 |
| Winogrande (5-shot) | 79.64 |
| GSM8k (5-shot) | 69.75 |
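The leaderboard average is the unweighted mean of the six benchmark scores, which can be verified in a couple of lines:

```python
scores = {
    "ARC (25-shot)": 67.75,
    "HellaSwag (10-shot)": 86.15,
    "MMLU (5-shot)": 63.94,
    "TruthfulQA (0-shot)": 59.80,
    "Winogrande (5-shot)": 79.64,
    "GSM8k (5-shot)": 69.75,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 71.17
```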