InferenceIllusionist committed
Commit 4f4c3d8
1 Parent(s): 85478d1

Update README.md

README.md CHANGED
@@ -21,6 +21,7 @@ An initial foray into the world of fine-tuning. The goal of this release was to
  * [Excalibur-7b](https://huggingface.co/InferenceIllusionist/Excalibur-7b) fine-tuned with Direct Preference Optimization (DPO) using Intel/orca_dpo_pairs
  * This is a quick experiment to determine the impact of DPO finetuning on the original base model
  * Ran for a little over an hour on a single A100
+ * Internal benchmarks showed improvement over base model, awaiting final results
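For context on the technique the diff mentions: DPO trains the policy directly on preference pairs (chosen vs. rejected responses, as in Intel/orca_dpo_pairs) against a frozen reference model. Below is a minimal sketch of the per-pair DPO loss; the function name, arguments, and β value are illustrative, not taken from this repo's training code.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for a single preference pair.

    Each argument is the summed log-probability of the chosen/rejected
    response under the trainable policy or the frozen reference model.
    """
    # Log-ratio of policy vs. reference for each response
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # Bradley-Terry-style logit, scaled by beta
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)): shrinks as the policy prefers the chosen response
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

The loss equals log 2 when the policy and reference agree, and falls as the policy raises the chosen response's relative log-probability; in practice a library such as TRL computes this over batches of tokenized pairs.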