---
base_model:
- InferenceIllusionist/Excalibur-7b
library_name: transformers
tags:
- finetune
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
---


# Excalibur-7b-DPO

<img src="https://i.imgur.com/pbPbqq0.jpeg" width="550"/>

An initial foray into the world of fine-tuning. The goal of this release was to improve the quality of the original model's responses, in particular for vision use cases (see the vision file requirement noted below).


## Notes & Methodology
* [Excalibur-7b](https://huggingface.co/InferenceIllusionist/Excalibur-7b) fine-tuned with Direct Preference Optimization (DPO) using Intel/orca_dpo_pairs (a training sketch follows this list)
* This is a quick experiment to determine the impact of DPO finetuning on the original base model
* Ran for a little over an hour on a single A100
* Internal benchmarks showed an improvement over the base model; final results are pending
* Precision: bfloat16
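
For readers who want to reproduce a run like this, below is a minimal sketch using TRL's `DPOTrainer`. The hyperparameters (beta, batch size, learning rate, epochs) are illustrative assumptions, not the exact settings used for this release:

```python
# Minimal DPO fine-tuning sketch using TRL's DPOTrainer.
# Hyperparameters are illustrative assumptions, not this release's settings.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "InferenceIllusionist/Excalibur-7b"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral tokenizers ship without a pad token

# Intel/orca_dpo_pairs has system/question/chosen/rejected columns;
# fold system + question into the single "prompt" field DPOTrainer expects.
def to_dpo_format(row):
    return {
        "prompt": f"{row['system']}\n{row['question']}".strip(),
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    }

dataset = load_dataset("Intel/orca_dpo_pairs", split="train").map(
    to_dpo_format, remove_columns=["system", "question"]
)

config = DPOConfig(
    output_dir="Excalibur-7b-DPO",
    beta=0.1,                       # assumed DPO temperature
    per_device_train_batch_size=2,  # assumed; sized for a single A100 in bf16
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # TRL builds a frozen reference copy when None
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # processing_class= in newer TRL releases
)
trainer.train()
```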


## Sample Question - Vision
<img src="https://i.imgur.com/7aRWtzU.jpeg" width="425"/>

<b>Vision functionality requires the additional [mistral-7b-mmproj-v1.5-Q4_1.gguf](https://huggingface.co/koboldcpp/mmproj/tree/main) file</b>

Select the gguf file of your choice in Kobold as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:
<img src="https://i.imgur.com/x8vqH29.png" width="425"/>
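
If you launch KoboldCpp from the command line instead, the projector can be passed via its `--mmproj` flag. The model filename below is a placeholder for whichever quant you downloaded:

```
python koboldcpp.py --model Excalibur-7b-DPO-Q4_K_M.gguf --mmproj mistral-7b-mmproj-v1.5-Q4_1.gguf
```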

## Prompt Format
* For best results, please use ChatML as the prompt format (example below). Alpaca may also work.
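
For reference, a single-turn ChatML prompt looks like this (the system message is just an example):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Describe what is happening in this image.<|im_end|>
<|im_start|>assistant
```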