Model Name: Llama 3 orca_mini_v6_8b_dpo
Llama 3 orca_mini_v6_8b_dpo is trained with various DPO datasets.
Passionate about Generative AI? I help companies privately train and deploy custom LLMs/MLLMs affordably. For startups, I can even assist with securing GPU grants to get you started. Let's chat! https://www.linkedin.com/in/pankajam Looking forward to connecting!
NOTICE
By providing proper credit and attribution, you are granted permission to use this model as a foundational base for further full fine-tuning, DPO, PPO, or ORPO tuning, and any kind of merges. I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive general model. Dive in and innovate!
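As one illustration, here is a minimal sketch of further DPO tuning with Hugging Face TRL. The dataset name, output directory, and hyperparameters below are placeholders, not from this model card, and the trainer's keyword arguments vary across trl versions:

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_slug = "pankajmathur/orca_mini_v6_8b_dpo"
model = AutoModelForCausalLM.from_pretrained(model_slug)
tokenizer = AutoTokenizer.from_pretrained(model_slug)

# Any preference dataset with "prompt", "chosen", and "rejected" columns works here.
train_dataset = load_dataset("your-org/your-preference-dataset", split="train")  # placeholder

training_args = DPOConfig(output_dir="orca_mini_v6_8b_dpo_further", beta=0.1)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older trl releases
)
trainer.train()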
Evaluation
Coming Soon...
Example Usage
Here is the ChatML prompt format:
<|im_start|>system
You are Orca Mini, a helpful AI assistant.<|im_end|>
<|im_start|>user
Hello Orca Mini, what can you do for me?<|im_end|>
<|im_start|>assistant
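The code example below applies this template automatically via the tokenizer. If you need to construct the prompt string by hand, here is a minimal sketch mirroring the layout above (the helper name is illustrative, not part of the model's API):

# Build the ChatML prompt string manually; mirrors the template shown above.
def build_chatml_prompt(system_msg: str, user_msg: str) -> str:
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are Orca Mini, a helpful AI assistant.",
    "Hello Orca Mini, what can you do for me?",
)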
Below is a code example showing how to use this model:
from transformers import AutoModelForCausalLM, AutoTokenizer

model_slug = "pankajmathur/orca_mini_v6_8b_dpo"
model = AutoModelForCausalLM.from_pretrained(model_slug)
tokenizer = AutoTokenizer.from_pretrained(model_slug)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
# apply_chat_template builds the ChatML prompt; add_generation_prompt appends
# the assistant header so generation starts in the assistant turn.
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
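For an 8B-parameter model you will likely want half precision and automatic device placement. A sketch, assuming torch and the accelerate package are installed:

import torch

model = AutoModelForCausalLM.from_pretrained(
    model_slug,
    torch_dtype=torch.bfloat16,  # roughly halves memory vs. float32
    device_map="auto",           # requires the accelerate package
)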
This model is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.
Quants
GGUF: Coming Soon
AWQ: Coming Soon
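Until official GGUF/AWQ quants are published, one option is on-the-fly 4-bit loading with bitsandbytes. A minimal sketch, assuming the bitsandbytes and accelerate packages are installed (this is not an official quant of this model):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for the 4-bit weights
)
model = AutoModelForCausalLM.from_pretrained(
    "pankajmathur/orca_mini_v6_8b_dpo",
    quantization_config=bnb_config,
    device_map="auto",
)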
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.29 |
| IFEval (0-Shot)     | 38.83 |
| BBH (3-Shot)        | 32.48 |
| MATH Lvl 5 (4-Shot) |  5.51 |
| GPQA (0-shot)       |  6.82 |
| MuSR (0-shot)       |  9.26 |
| MMLU-PRO (5-shot)   | 28.85 |