---
license: mit
datasets:
- Open-Orca/SlimOrca-Dedup
- migtissera/Synthia-v1.3
- LDJnr/Verified-Camel
- LDJnr/Pure-Dove
- LDJnr/Capybara
- meta-math/MetaMathQA
- Intel/orca_dpo_pairs
- argilla/ultrafeedback-binarized-preferences-cleaned
widget:
  - example_title: "Example interaction"
    text: "Why is the sky blue?"
inference: 
  parameters:
    do_sample: True
    temperature: 0.1
model-index:
- name: phi-2-orange-v2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 61.86
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 76.32
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 55.72
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 54.84
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 75.69
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 57.62
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2
      name: Open LLM Leaderboard
---
![Phi-2 Orange](https://huggingface.co/rhysjones/phi-2-orange-v2/resolve/main/phi-2-orange.jpg)

# Phi-2 Orange Version 2

A two-step finetune of Phi-2, with a bit more zest.

This is an improved version of the original [Phi-2-Orange](https://huggingface.co/rhysjones/phi-2-orange) that 
uses an updated training process on the same datasets.

It is also based on the latest updated release of Microsoft's [Phi-2](https://huggingface.co/microsoft/phi-2), making it directly usable within Hugging Face's Transformers library without the need to set `trust_remote_code=True`.
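
For reference, a minimal loading sketch with standard Transformers classes (the fp16 dtype and `device_map` settings are illustrative assumptions, not requirements):

```python
# Minimal sketch: load Phi-2 Orange v2 with a recent Transformers release.
# No trust_remote_code flag is needed for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rhysjones/phi-2-orange-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: a GPU with fp16 support
    device_map="auto",          # requires the accelerate package
)
```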

# Prompt Format

Phi-2 Orange v2 uses ChatML as the prompt format.  
(Update 12th March 2024: fixed eos_token issue)

It's recommended to always include a system instruction in the prompt (use whatever system prompt you like):

```
<|im_start|>system
You are a helpful assistant for Python which outputs in Markdown format.<|im_end|>
<|im_start|>user
Write a function to calculate the Fibonacci sequence<|im_end|>
<|im_start|>assistant

```

For example, if you find the model's output to be overly verbose, instruct it to be short and concise:

```
<|im_start|>system
You are a helpful assistant. Be short and direct in your answers.<|im_end|>
<|im_start|>user
Was Tom Hanks in the movie Forrest Gump? If so, who did he play and give details of the plot.<|im_end|>
<|im_start|>assistant
```
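
If you prefer to build the ChatML prompt programmatically, a minimal sketch using `apply_chat_template` is shown below. It assumes the tokenizer ships a ChatML chat template (otherwise, format the `<|im_start|>` string by hand as above); the sampling settings mirror the `inference` parameters in this card's metadata:

```python
# Minimal sketch: generate a reply using the ChatML chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rhysjones/phi-2-orange-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant. Be short and direct in your answers."},
    {"role": "user", "content": "Was Tom Hanks in the movie Forrest Gump?"},
]
# Build the ChatML prompt and append the assistant turn marker.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,     # mirrors the inference parameters in the metadata
    temperature=0.1,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```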

# Evaluations


[Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)  
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rhysjones__phi-2-orange-v2)
|             Metric              |Value|
|---------------------------------|----:|
|Average                          |63.67|
|AI2 Reasoning Challenge (25-Shot)|61.86|
|HellaSwag (10-Shot)              |76.32|
|MMLU (5-Shot)                    |55.72|
|TruthfulQA (0-shot)              |54.84|
|Winogrande (5-shot)              |75.69|
|GSM8k (5-shot)                   |57.62|

[YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard)  
Evaluation from [mlabonne](https://huggingface.co/mlabonne)'s alternative LLM leaderboard:
|             Metric              |Value|
|---------------------------------|----:|
|Average                          |49.64|
|AGIEval                          |34.55|
|GPT4All                          |70.96|
|TruthfulQA                       |54.87|
|Bigbench                         |38.17|

# Limitations

This model shares the same limitations as the underlying Phi-2 model, details of which are found [here](https://huggingface.co/microsoft/phi-2#limitations-of-phi-2).