---
license: apache-2.0
tags:
  - finetuned
  - quantized
  - 4-bit
  - gptq
  - transformers
  - safetensors
  - llama
  - text-generation
  - dataset:ai2_arc
  - dataset:unalignment/spicy-3.1
  - dataset:codeparrot/apps
  - dataset:facebook/belebele
  - dataset:boolq
  - dataset:jondurbin/cinematika-v0.1
  - dataset:drop
  - dataset:lmsys/lmsys-chat-1m
  - dataset:TIGER-Lab/MathInstruct
  - dataset:cais/mmlu
  - dataset:Muennighoff/natural-instructions
  - dataset:openbookqa
  - dataset:piqa
  - dataset:Vezora/Tested-22k-Python-Alpaca
  - dataset:cakiki/rosetta-code
  - dataset:Open-Orca/SlimOrca
  - dataset:spider
  - dataset:squad_v2
  - dataset:migtissera/Synthia-v1.3
  - dataset:datasets/winogrande
  - dataset:nvidia/HelpSteer
  - dataset:Intel/orca_dpo_pairs
  - dataset:unalignment/toxic-dpo-v0.1
  - dataset:jondurbin/truthy-dpo-v0.1
  - dataset:allenai/ultrafeedback_binarized_cleaned
  - dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned
  - dataset:LDJnr/Capybara
  - dataset:JULIELab/EmoBank
  - dataset:kingbri/PIPPA-shareGPT
  - license:other
  - autotrain_compatible
  - endpoints_compatible
  - text-generation-inference
  - region:us
  - has_space
model_name: UNA-34Beagles-32K-bf16-v1-GPTQ
base_model: one-man-army/UNA-34Beagles-32K-bf16-v1
inference: false
model_creator: one-man-army
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
model-index:
  - name: UNA-34Beagles-32K-bf16-v1-GPTQ
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 26.11
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 26.29
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 24.43
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 47.27
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 50.83
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 0
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
          name: Open LLM Leaderboard
---
## Description

MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ is a quantized (GPTQ, 4-bit) version of one-man-army/UNA-34Beagles-32K-bf16-v1.
## How to use

### Install the necessary packages

```shell
pip install --upgrade accelerate auto-gptq transformers
```

### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ"

# GPTQ settings used for this quantization: 4-bit weights, group size 128.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=False
)

# Load the quantized weights (safetensors) onto the first GPU.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    use_safetensors=True,
    device="cuda:0",
    quantize_config=quantize_config
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Wrap the model and tokenizer in a text-generation pipeline.
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.1
)

outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
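As an alternative to the auto-gptq loader above, recent versions of transformers can load GPTQ repositories directly through `AutoModelForCausalLM` (this path additionally needs the `optimum` package and reads the quantization settings shipped with the repo). The snippet below is a minimal sketch of that approach, not taken from the original card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ"

# transformers picks up the GPTQ quantization config stored in the repository,
# so no explicit BaseQuantizeConfig is needed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is a large language model?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```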
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ).
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 29.15 |
| AI2 Reasoning Challenge (25-Shot) | 26.11 |
| HellaSwag (10-Shot)               | 26.29 |
| MMLU (5-Shot)                     | 24.43 |
| TruthfulQA (0-shot)               | 47.27 |
| Winogrande (5-shot)               | 50.83 |
| GSM8k (5-shot)                    |  0.00 |
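The `Avg.` row is the arithmetic mean of the six benchmark scores listed above; the quick check below (an illustrative snippet, not part of the leaderboard tooling) reproduces it up to rounding, since the leaderboard averages unrounded per-task values:

```python
# Arithmetic mean of the six per-task scores shown in the table above.
scores = [26.11, 26.29, 24.43, 47.27, 50.83, 0.00]
print(sum(scores) / len(scores))  # ~29.155, reported as 29.15 on the leaderboard
```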