---
license: mit
datasets:
- Open-Orca/SlimOrca-Dedup
- migtissera/Synthia-v1.3
- LDJnr/Verified-Camel
- LDJnr/Pure-Dove
- LDJnr/Capybara
- meta-math/MetaMathQA
- Intel/orca_dpo_pairs
- argilla/ultrafeedback-binarized-preferences-cleaned
---
![Phi-2 Orange](https://huggingface.co/rhysjones/phi-2-orange/resolve/main/phi-2-orange.jpg)

# Phi-2 Orange

A two-step finetune of Phi-2, with a bit of zest.

There is an updated model with higher evaluation scores at [rhysjones/phi-2-orange-v2](https://huggingface.co/rhysjones/phi-2-orange-v2), if you wish to try it.

# Training details

A first finetune using a collection of broad training data:

- [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- [migtissera/Synthia-v1.3](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- [LDJnr/Verified-Camel](https://huggingface.co/datasets/LDJnr/Verified-Camel)
- [LDJnr/Pure-Dove](https://huggingface.co/datasets/LDJnr/Pure-Dove)
- [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)

And then a DPO finetune (see the illustrative sketch after the list below) using:

- [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned)

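For illustration only, here is a minimal sketch of what the DPO step could look like with the [TRL](https://github.com/huggingface/trl) library. The actual training script, library choice, checkpoint path and hyperparameters used for Phi-2 Orange are not published in this card, so everything below is an assumption.

```
# Hypothetical sketch of the DPO stage using TRL's DPOTrainer; the real
# script and hyperparameters for Phi-2 Orange are not published here.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Start from the checkpoint produced by the first finetune (path is an assumption).
sft_checkpoint = "path/to/phi-2-sft-checkpoint"
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint, trust_remote_code=True)

# One of the two preference datasets listed above.
raw = load_dataset("Intel/orca_dpo_pairs", split="train")

def to_preference_format(example):
    # Map the dataset's columns onto the prompt/chosen/rejected schema
    # expected by DPOTrainer.
    return {
        "prompt": example["question"],
        "chosen": example["chosen"],
        "rejected": example["rejected"],
    }

train_dataset = raw.map(to_preference_format, remove_columns=raw.column_names)

config = DPOConfig(
    output_dir="phi-2-orange-dpo",
    per_device_train_batch_size=2,   # illustrative values, not the real settings
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    beta=0.1,
)

trainer = DPOTrainer(
    model=model,                 # a reference model is created internally when ref_model is omitted
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL releases take tokenizer= instead
)
trainer.train()
```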

# Run within Ollama

If you're using [Ollama](https://ollama.ai), you can download and run the model with:
```
ollama run rhysjones/phi-2-orange
```

# Prompt Format

Phi-2 Orange uses ChatML as its prompt format, with or without a system instruction.

To prompt with a system instruction (use whatever system prompt you like):

```
<|im_start|>system
You are a helpful assistant for Python which outputs in Markdown format.<|im_end|>
<|im_start|>user
Write a function to calculate the Fibonacci sequence<|im_end|>
<|im_start|>assistant

```

You can also omit the system prompt if you wish:

```
<|im_start|>user
Why is the sky blue?<|im_end|>
<|im_start|>assistant

```
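To run the model directly with the Transformers library, the following is a minimal sketch (not from the model authors) that builds a ChatML prompt by hand; the dtype, device placement and generation settings are illustrative assumptions.

```
# Minimal sketch: load Phi-2 Orange and generate from a ChatML prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rhysjones/phi-2-orange"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # the repository ships custom phi-msft modelling code
)

# Build a ChatML prompt, here with a system instruction (optional).
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Why is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # illustrative sampling settings
)

# Decode only the newly generated tokens and trim at the ChatML end marker.
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply.split("<|im_end|>")[0].strip())
```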

# Evaluations
Evaluations were done using mlabonne's useful Colab notebook [llm-autoeval](https://github.com/mlabonne/llm-autoeval).
Also check out the alternative leaderboard at [Yet_Another_LLM_Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

|                             Model                              |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|----------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)|  **33.37**|  71.33|      49.87|   **37.3**|  **47.97**|
|[phi-2-dpo](https://huggingface.co/lxuechen/phi-2-dpo)|  30.39|  **71.68**|     **50.75**|    34.9|  46.93|
|[dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)|  33.12|  69.85|     47.39|    37.2|  46.89|
|[phi-2](https://huggingface.co/microsoft/phi-2)|  27.98|   70.8|     44.43|   35.21|  44.61|