
Excalibur-7b-DPO-GGUF

An initial foray into fine-tuning. The goal of this release was to improve the quality of the original model's responses, particularly for vision use cases.*

FP16 weights are available here.

Notes & Methodology

  • Excalibur-7b fine-tuned with Direct Preference Optimization (DPO) using Intel/orca_dpo_pairs
  • A quick experiment to measure the impact of DPO fine-tuning on the original base model (see the training sketch after this list)
  • Training ran for a little over an hour on a single A100
  • Internal benchmarks showed an improvement over the base model; final results are pending
  • Precision: bfloat16
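
For readers who want to reproduce the general recipe, below is a minimal sketch of a DPO run on Intel/orca_dpo_pairs using the Hugging Face TRL library. This is not the exact training script used for this release; the hyperparameters, the column mapping, and the TRL API version (pre-0.12 assumed) are all illustrative assumptions.

```python
# Minimal DPO fine-tuning sketch with TRL (pre-0.12 API assumed).
# Hyperparameters are illustrative, not the ones used for this release.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "InferenceIllusionist/Excalibur-7b"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Intel/orca_dpo_pairs rows hold "system", "question", "chosen", "rejected";
# DPOTrainer expects "prompt", "chosen", "rejected".
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(
    lambda row: {"prompt": row["question"]},
    remove_columns=["system", "question"],
)

args = DPOConfig(
    output_dir="excalibur-7b-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,   # matches the bfloat16 precision noted above
    beta=0.1,    # strength of the KL penalty toward the reference model
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,  # processing_class=... in TRL >= 0.12
)
trainer.train()
```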

Sample Question - Vision

*Vision requires an additional mmproj file. Two options are available for vision functionality (inside the original repo or linked below):

Select the gguf file of your choice in Kobold as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu.
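
If you prefer a programmatic route instead of the Kobold GUI, the sketch below loads a quant together with the mmproj file through llama-cpp-python's LLaVA chat handler. The file names are placeholders; substitute whichever quant and mmproj file you downloaded.

```python
# Hedged sketch: vision inference via llama-cpp-python's LLaVA support.
# File names below are placeholders, not exact names from this repo.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="Excalibur-7b-DPO-Q4_K_M.gguf",  # any quant from this repo
    chat_handler=chat_handler,
    n_ctx=4096,  # a larger context leaves room for image embeddings
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": "file:///path/to/image.jpg"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ]
)
print(response["choices"][0]["message"]["content"])
```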

Prompt Format

  • For best results, use ChatML as the prompt format (shown below). Alpaca may also work.
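
For reference, a ChatML-formatted prompt looks like the following; the system message here is only an example.

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{your prompt here}<|im_end|>
<|im_start|>assistant
```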

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 73.84 |
| AI2 Reasoning Challenge (25-Shot) | 70.90 |
| HellaSwag (10-Shot)               | 87.93 |
| MMLU (5-Shot)                     | 65.46 |
| TruthfulQA (0-shot)               | 70.82 |
| Winogrande (5-shot)               | 82.48 |
| GSM8k (5-shot)                    | 65.43 |