---
license: apache-2.0
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: kellemar-DPO-7B-d
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decruz07/kellemar-DPO-7B-d
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decruz07/kellemar-DPO-7B-d
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decruz07/kellemar-DPO-7B-d
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.88
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decruz07/kellemar-DPO-7B-d
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decruz07/kellemar-DPO-7B-d
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.02
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decruz07/kellemar-DPO-7B-d
name: Open LLM Leaderboard
---
# Model Card for decruz07/kellemar-DPO-7B-d
This model uses teknium/OpenHermes-2.5-Mistral-7B as its base and was fine-tuned with DPO on the argilla/distilabel-intel-orca-dpo-pairs dataset.
## Model Details
Fine-tuned with DPO (`beta = 0.05`).
### Model Description
- **Developed by:** @decruz
- **Funded by:** my full-time job
- **Finetuned from model:** teknium/OpenHermes-2.5-Mistral-7B
## Uses
You can use this model for general-purpose inference, or as a base for further fine-tuning.
## How to Get Started with the Model
You can deploy the model in a Hugging Face Space, or load it with a few lines of Python and run inference directly, as sketched below.
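A minimal inference sketch using `transformers`, assuming the tokenizer ships a chat template (the OpenHermes-2.5 base uses ChatML); the prompt content is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "decruz07/kellemar-DPO-7B-d"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires `accelerate`
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```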
## Training Details
The following training configuration was used. Note that `new_model`, `model`, `ref_model`, `dataset`, `tokenizer`, and `peft_config` are defined elsewhere in the notebook (illustrative definitions are sketched after the snippet):
```python
from transformers import TrainingArguments
from trl import DPOTrainer

training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    save_strategy="no",
    logging_steps=1,
    output_dir=new_model,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    bf16=True,
    report_to="wandb",
)

# Create the DPO trainer
dpo_trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
```
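A hedged sketch of how the names referenced above might be defined, following the usual QLoRA/DPO notebook pattern; the exact values used for this model are not recorded in this card:

```python
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "teknium/OpenHermes-2.5-Mistral-7B"
new_model = "kellemar-DPO-7B-d"

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships no pad token by default

# Policy model to train and frozen reference model for the DPO loss.
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)
ref_model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# Illustrative LoRA config; the actual rank and target modules may differ.
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```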
### Training Data
This model was trained on [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs).
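For illustration, a sketch of loading the preference pairs. `DPOTrainer` expects `prompt`, `chosen`, and `rejected` columns; the column mapping and preprocessing shown here (e.g. how the `system` field is handled) are assumptions, not the exact pipeline used:

```python
from datasets import load_dataset

dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")

# Illustrative mapping: the user question lives in the "input" column.
dataset = dataset.rename_column("input", "prompt")
dataset = dataset.remove_columns(
    [c for c in dataset.column_names if c not in ("prompt", "chosen", "rejected")]
)
```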
### Training Procedure
Trained with Maxime Labonne's Google Colab notebook on fine-tuning Mistral 7B with DPO.
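Putting the pieces together, a hypothetical continuation that runs training and saves the result; the calls are standard TRL/Transformers API and the output name mirrors this repo:

```python
dpo_trainer.train()

# Save the fine-tuned weights and tokenizer.
dpo_trainer.model.save_pretrained(new_model)
tokenizer.save_pretrained(new_model)
```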
## Model Card Authors
@decruz
## Model Card Contact
@decruz on X/Twitter
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_decruz07__kellemar-DPO-7B-d)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.84|
|AI2 Reasoning Challenge (25-Shot)|66.89|
|HellaSwag (10-Shot) |85.16|
|MMLU (5-Shot) |62.77|
|TruthfulQA (0-shot) |56.88|
|Winogrande (5-shot) |79.32|
|GSM8k (5-shot) |62.02|