---
base_model: microsoft/Phi-3-mini-4k-instruct
inference: false
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
model_creator: microsoft
model_name: Phi-3-mini-4k-instruct
model_type: phi3
quantized_by: brittlewis12
---

# Phi 3 Mini 4K Instruct GGUF

**Original model**: [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)

**Model creator**: [Microsoft](https://huggingface.co/microsoft)

This repo contains GGUF format model files for Microsoft’s Phi 3 Mini 4K Instruct.

> The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.

Learn more on Microsoft’s [Model page](https://azure.microsoft.com/en-us/blog/introducing-phi-3-redefining-whats-possible-with-slms/).

### What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Converted with llama.cpp build 2721 (revision [28103f4](https://github.com/ggerganov/llama.cpp/commit/28103f4832e301a9c84d44ff0df9d75d46ab6c76)), using [autogguf](https://github.com/brittlewis12/autogguf).
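These GGUF files can be loaded by any llama.cpp-based runtime. As one illustration (not the only option), here is a minimal sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings; the filename is hypothetical and should be replaced with whichever quantization you download from this repo.

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and a local copy of one
# of this repo's GGUF files. The filename below is hypothetical -- substitute the
# quantization you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./phi-3-mini-4k-instruct.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,       # Phi 3 Mini's 4K context window
    n_gpu_layers=-1,  # offload all layers if a GPU/Metal backend is available
)
```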
### Prompt template

```
<|system|>
{{system_prompt}}<|end|>
<|user|>
{{prompt}}<|end|>
<|assistant|>

```
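For programmatic use, the template can be filled in with plain string formatting. Below is a small sketch; the helper function is our own naming (not part of any library), and it reuses the hypothetical `llm` object from the loading example above.

```python
def format_phi3_prompt(system_prompt: str, prompt: str) -> str:
    """Fill in the Phi 3 prompt template shown above for a single-turn request."""
    return (
        f"<|system|>\n{system_prompt}<|end|>\n"
        f"<|user|>\n{prompt}<|end|>\n"
        f"<|assistant|>\n"
    )

# Completion call with llama-cpp-python, reusing `llm` from the loading sketch above.
out = llm(
    format_phi3_prompt("You are a helpful assistant.", "Summarize GGUF in one sentence."),
    max_tokens=128,
    stop=["<|end|>"],  # stop once the assistant's turn is complete
)
print(out["choices"][0]["text"])
```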
---

## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

![cnvrs.ai](https://pbs.twimg.com/profile_banners/1744049151241797632/1704592571/1500x500)

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date

---

## Original Model Evaluation

> As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
> The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
> More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
>
> The number of k–shot examples is listed per-benchmark.

| Benchmark | Phi-3-Mini-4K-In<br>3.8b | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 47.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |