Quantization made by Richard Erkhov.
BeagleLake-7B - GGUF
- Model creator: https://huggingface.co/fhai50032/
- Original model: https://huggingface.co/fhai50032/BeagleLake-7B/
Name | Quant method | Size |
---|---|---|
BeagleLake-7B.Q2_K.gguf | Q2_K | 2.53GB |
BeagleLake-7B.IQ3_XS.gguf | IQ3_XS | 2.81GB |
BeagleLake-7B.IQ3_S.gguf | IQ3_S | 2.96GB |
BeagleLake-7B.Q3_K_S.gguf | Q3_K_S | 2.95GB |
BeagleLake-7B.IQ3_M.gguf | IQ3_M | 3.06GB |
BeagleLake-7B.Q3_K.gguf | Q3_K | 3.28GB |
BeagleLake-7B.Q3_K_M.gguf | Q3_K_M | 3.28GB |
BeagleLake-7B.Q3_K_L.gguf | Q3_K_L | 3.56GB |
BeagleLake-7B.IQ4_XS.gguf | IQ4_XS | 3.67GB |
BeagleLake-7B.Q4_0.gguf | Q4_0 | 3.83GB |
BeagleLake-7B.IQ4_NL.gguf | IQ4_NL | 3.87GB |
BeagleLake-7B.Q4_K_S.gguf | Q4_K_S | 3.86GB |
BeagleLake-7B.Q4_K.gguf | Q4_K | 4.07GB |
BeagleLake-7B.Q4_K_M.gguf | Q4_K_M | 4.07GB |
BeagleLake-7B.Q4_1.gguf | Q4_1 | 4.24GB |
BeagleLake-7B.Q5_0.gguf | Q5_0 | 4.65GB |
BeagleLake-7B.Q5_K_S.gguf | Q5_K_S | 4.65GB |
BeagleLake-7B.Q5_K.gguf | Q5_K | 4.78GB |
BeagleLake-7B.Q5_K_M.gguf | Q5_K_M | 4.78GB |
BeagleLake-7B.Q5_1.gguf | Q5_1 | 5.07GB |
BeagleLake-7B.Q6_K.gguf | Q6_K | 5.53GB |
BeagleLake-7B.Q8_0.gguf | Q8_0 | 7.17GB |
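
To try one of these quants locally, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python`. The repo id placeholder and the choice of the Q4_K_M file are assumptions; adjust them to this repository's actual id and the file you want.

```python
# Sketch: download one quant from this repo and run it with llama-cpp-python.
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder: set this to the Hugging Face repo id that hosts these GGUF files.
repo_id = "<this-gguf-repo-id>"
model_path = hf_hub_download(repo_id=repo_id, filename="BeagleLake-7B.Q4_K_M.gguf")

# Load the quantized model and ask a question via the chat API.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```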
Original model description:
```yaml
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- fhai50032/RolePlayLake-7B
- mlabonne/NeuralBeagle14-7B
base_model:
- fhai50032/RolePlayLake-7B
- mlabonne/NeuralBeagle14-7B
model-index:
- name: BeagleLake-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 70.39
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 87.38
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.25
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 64.92
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 83.19
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.91
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B
      name: Open LLM Leaderboard
```
BeagleLake-7B
BeagleLake-7B is a merge of the following models:
- mlabonne/NeuralBeagle14-7B
- fhai50032/RolePlayLake-7B
Merging models does not by itself make them more powerful, but it can act a little like transfer learning, and merged models tend to score well on the Leaderboard. For example, NeuralBeagle is a strong model with a lot of potential to grow, while RolePlayLake is well suited to role-play (no simping) and is significantly uncensored. Fine-tuning a merged model as a base model is surely a direction worth pursuing, with a lot of potential going forward.
Many thanks to Charles Goddard for making the simple 'mergekit' interface.
🧩 Configuration
```yaml
models:
  - model: mlabonne/NeuralBeagle14-7B
    # no params for base model
  - model: fhai50032/RolePlayLake-7B
    parameters:
      weight: 0.8
      density: 0.6
  - model: mlabonne/NeuralBeagle14-7B
    parameters:
      weight: 0.3
      density: [0.1, 0.3, 0.5, 0.7, 1]
merge_method: dare_ties
base_model: mlabonne/NeuralBeagle14-7B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
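
A merge like this can be reproduced with mergekit. The snippet below is a minimal sketch using mergekit's Python API (the `mergekit-yaml` CLI is an equivalent alternative); the config filename and output path are illustrative, and the exact API may vary across mergekit versions.

```python
# Sketch: run the DARE-TIES merge defined in the YAML config above.
# pip install mergekit
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the config shown above (saved here as beaglelake.yaml -- path is illustrative).
with open("beaglelake.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Write the merged weights to ./BeagleLake-7B.
run_merge(
    merge_config,
    "./BeagleLake-7B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```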
💻 Usage
```python
# Install dependencies (notebook/Colab syntax)
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "fhai50032/BeagleLake-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model into a text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Generate a response
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B
Metric | Value |
---|---|
Avg. | 72.34 |
AI2 Reasoning Challenge (25-Shot) | 70.39 |
HellaSwag (10-Shot) | 87.38 |
MMLU (5-Shot) | 64.25 |
TruthfulQA (0-shot) | 64.92 |
Winogrande (5-shot) | 83.19 |
GSM8k (5-shot) | 63.91 |
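
The per-task scores above can be approximately re-run with the lm-evaluation-harness. The sketch below covers the ARC-Challenge row only; the harness version and settings used by the Open LLM Leaderboard may differ, so local numbers may not match exactly.

```python
# Sketch: re-evaluate one leaderboard task (ARC-Challenge, 25-shot) locally.
# pip install lm-eval
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=fhai50032/BeagleLake-7B,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```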