Phi-2-ORPO is a finetuned version of **[microsoft/phi-2](https://huggingface.co/microsoft/phi-2)** on the **[argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k)** preference dataset using **Odds Ratio Preference Optimization (ORPO)**.

## LazyORPO

This model has been trained using **[LazyORPO](https://colab.research.google.com/drive/19ci5XIcJDxDVPY2xC1ftZ5z1kc2ah_rx?usp=sharing)**, a Colab notebook that makes the training process much easier. It is based on the [ORPO paper](https://huggingface.co/papers/2403.07691).
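#### Training sketch

For a rough idea of what an ORPO run looks like in plain code, the sketch below uses TRL's `ORPOTrainer`. It is an illustration only, not the exact LazyORPO recipe: the hyperparameters are placeholders, and it assumes the preference data has already been flattened into plain-text `prompt`/`chosen`/`rejected` columns (the notebook does its own preprocessing).

```python
# Illustrative ORPO training sketch with TRL (>= 0.8.2); not the LazyORPO notebook itself.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

# Assumption: the dataset exposes plain-text "prompt", "chosen", "rejected" columns.
dataset = load_dataset("argilla/dpo-mix-7k", split="train")

config = ORPOConfig(
    output_dir="phi-2-orpo",
    beta=0.1,                        # weight of the odds-ratio term (lambda in the paper)
    max_length=1024,
    max_prompt_length=512,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    learning_rate=5e-6,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,             # newer TRL versions name this processing_class
)
trainer.train()
```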
#### What is ORPO?
Odds Ratio Preference Optimization (ORPO) proposes a new method to train LLMs by combining SFT and alignment into a single objective (loss function), achieving state-of-the-art results (the objective is sketched below). Some highlights of this technique are:

📊 Mistral ORPO achieves 12.20% on AlpacaEval 2.0, 66.19% on IFEval, and 7.32 on MT-Bench, outperforming Hugging Face's Zephyr Beta.
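#### ORPO objective

For reference, the objective from the ORPO paper adds a log odds-ratio penalty to the usual supervised fine-tuning loss, where $y_w$ is the chosen response, $y_l$ the rejected response, and $\lambda$ weights the alignment term:

$$
\mathcal{L}_{ORPO} = \mathbb{E}_{(x,\, y_w,\, y_l)}\big[\mathcal{L}_{SFT} + \lambda \cdot \mathcal{L}_{OR}\big],
\qquad
\mathcal{L}_{OR} = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right),
\qquad
\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}
$$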
#### Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run everything on the GPU by default
torch.set_default_device("cuda")

# Load the ORPO-finetuned model and its tokenizer
model = AutoModelForCausalLM.from_pretrained("abideen/phi2-pro", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("abideen/phi2-pro", trust_remote_code=True)

# Ask the model to complete a Python function
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Evaluation