oopere committed
Commit d7d01f4
1 Parent(s): 5ca6a68

This model was created to support the book "Large Language Models Projects", published by Apress and written by Pere Martra.

To create it, we started with the [Phi-3-Mini-4K-Instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) model and applied DPO alignment using the [distilabel-capybara-dpo-7k-binarized dataset](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized).

Phi-3 is a state-of-the-art model with 3.8 billion parameters that has outperformed other models with 7 billion parameters. The DPO alignment process produced good results, shifting the model's responses toward the style of the Capybara dataset.
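
The alignment step can be sketched with the TRL library. The snippet below is an illustration rather than the book's exact notebook: it assumes a recent TRL release (which provides `DPOConfig` and `DPOTrainer`), assumes the dataset has been mapped into the `prompt`/`chosen`/`rejected` format shown in the example row below, and every hyperparameter value is a placeholder.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "microsoft/Phi-3-mini-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Assumes the split has already been mapped into 'prompt'/'chosen'/'rejected'
# strings formatted with Phi-3's chat tokens, as in the example row below.
dataset = load_dataset("argilla/distilabel-capybara-dpo-7k-binarized", split="train")

config = DPOConfig(
    output_dir="phi3-mini-dpo",
    beta=0.1,                       # DPO regularization strength; a common default
    per_device_train_batch_size=1,  # illustrative; size to your GPU memory
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    max_steps=200,                  # illustrative
)

trainer = DPOTrainer(
    model=model,                    # with no ref_model, TRL uses a frozen copy as reference
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,     # older TRL releases take tokenizer= instead
)
trainer.train()
trainer.save_model()
```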

**Row in the dataset:**
*{'prompt': '<|user|>\nAssist me in calculating 9319357631 plus 595. Numbers and symbols only, please.<|end|>\n<|assistant|>\n',
'chosen': 'The sum of 9319357631 and 595 is 9319358226.<|end|>\n',
'rejected': 'The result of adding 9319357631 and 595 is 9319363626.<|end|>\n'}*
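
If you want to inspect rows like this one yourself, the dataset can be streamed from the Hub. This is a minimal sketch; the raw column layout may differ from the formatted row shown above.

```python
from datasets import load_dataset

# Stream the dataset so the full 7k rows are not downloaded up front.
ds = load_dataset(
    "argilla/distilabel-capybara-dpo-7k-binarized",
    split="train",
    streaming=True,
)
row = next(iter(ds))
print(row.keys())  # 'chosen' / 'rejected' hold the preference pair
```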

**Prompt used to query the models:**
*3713841893836/4?
Limit your response to mathematical expressions and symbols.*

**Base Model response:**
*To find the result of the division, we can simply divide the given number by 4:*
$$
\frac{3713841893836}{4} = 928460473459
$$

**Fine-Tuned Model response:**
*3713841893836 ÷ 4 = 928460473459*
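
To reproduce this comparison, both models can be queried with the same prompt through the chat template. The snippet below is a sketch: it loads the base model, the comment marks where to swap in this model's Hub id, and the generation settings are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model shown here; swap in this model's Hub id for the fine-tuned run.
repo_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)  # older transformers may need trust_remote_code=True

messages = [{
    "role": "user",
    "content": "3713841893836/4?\nLimit your response to mathematical expressions and symbols.",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```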


If you want to see how the model was created, check out the repository where the book's notebooks are kept up to date: https://github.com/peremartra/Large-Language-Model-Notebooks-Course

Files changed (1): README.md (+10 -0)

README.md ADDED
@@ -0,0 +1,10 @@
+ ---
+ license: mit
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: text-generation
+ tags:
+ - dpo
+ - phi-3
+ ---