Update README.md

content: Can you provide ways to eat combinations of bananas and dragonfruits?
---

# RachidAR/Phi-3-mini-4k-ins-June2024-Q5_K_M-imat-GGUF (June 2024 Update)
This model was converted to GGUF format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for more details on the model.
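If you just want the quantized weights without any of the tooling below, you can fetch a single GGUF file programmatically. A minimal sketch using `huggingface_hub`; the repo id and filename are taken from the CLI examples further down, so swap in whichever quant actually lives in this repo:

```python
from huggingface_hub import hf_hub_download

# Download one GGUF file from the Hub into the local cache and get its path.
# Repo id and filename mirror the llama-cli examples below; adjust them if
# this repo ships a different quant (e.g. a Q5_K_M imatrix file).
model_path = hf_hub_download(
    repo_id="RachidAR/Phi-3-mini-4k-instruct-Q6_K-GGUF",
    filename="phi-3-mini-4k-instruct-q6_k.gguf",
)
print(model_path)
```
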
## Release Notes

This is an update over the original instruction-tuned Phi-3-mini release, based on valuable customer feedback.
The model was trained on additional post-training data, leading to substantial gains in instruction following and structured output.
We also **improved multi-turn conversation quality**, **added explicit support for the `<|system|>` tag**, and **significantly improved reasoning capability**.
We believe most use cases will benefit from this release, but we encourage users to test it in their particular AI applications.
We appreciate the enthusiastic adoption of the Phi-3 model family and continue to welcome all feedback from the community.

The table below highlights the improvements in instruction following, structured output, and reasoning of the new release on public and internal benchmark datasets.

| Benchmarks | Original | June 2024 Update |
|:------------|:----------|:------------------|
| Instruction Extra Hard | 5.7 | 6.0 |
| Instruction Hard | 4.9 | 5.1 |
| Instructions Challenge | 24.6 | 42.3 |
| JSON Structure Output | 11.5 | 52.3 |
| XML Structure Output | 14.4 | 49.8 |
| GPQA | 23.7 | 30.6 |
| MMLU | 68.8 | 70.9 |
| **Average** | **21.9** | **36.7** |

### Chat Format

Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:

```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
Question?<|end|>
<|assistant|>
```

For example:

```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```

where the model generates the text after `<|assistant|>`. For a few-shot prompt, the prompt can be formatted as follows:

```markdown
<|system|>
You are a helpful travel assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:

1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.
2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.
3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.

These are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
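
If you are assembling these prompts in code rather than by hand, the format above is mechanical enough to generate. A minimal sketch in plain Python with no external dependencies; the function name is ours for illustration, not part of any API:

```python
# Build a Phi-3 style chat prompt from a system message and alternating
# user/assistant turns, following the <|role|> ... <|end|> format shown above.
def build_phi3_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    parts = [f"<|system|>\n{system}<|end|>"]
    for role, text in turns:  # role is "user" or "assistant"
        parts.append(f"<|{role}|>\n{text}<|end|>")
    # End with the assistant tag so the model continues from there.
    parts.append("<|assistant|>\n")
    return "\n".join(parts)

prompt = build_phi3_prompt(
    "You are a helpful travel assistant.",
    [("user", "I am going to Paris, what should I see?")],
)
print(prompt)
```
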
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux): `brew install llama.cpp`

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo RachidAR/Phi-3-mini-4k-instruct-Q6_K-GGUF --hf-file phi-3-mini-4k-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo RachidAR/Phi-3-mini-4k-instruct-Q6_K-GGUF --hf-file phi-3-mini-4k-instruct-q6_k.gguf -c 2048
```
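
Once llama-server is running, you can talk to it over HTTP. A minimal sketch using only Python's standard library; it assumes llama-server's default address (http://localhost:8080) and the OpenAI-compatible `/v1/chat/completions` route of reasonably recent llama.cpp builds:

```python
import json
import urllib.request

# Send one chat completion request to a locally running llama-server.
# Assumes the default host/port and the OpenAI-compatible chat route.
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the capital of France?"},
        ],
        "max_tokens": 128,
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```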

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub: `git clone https://github.com/ggerganov/llama.cpp`

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag (plus any hardware-specific flags): `cd llama.cpp && LLAMA_CURL=1 make`

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo RachidAR/Phi-3-mini-4k-instruct-Q6_K-GGUF --hf-file phi-3-mini-4k-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo RachidAR/Phi-3-mini-4k-instruct-Q6_K-GGUF --hf-file phi-3-mini-4k-instruct-q6_k.gguf -c 2048
```
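
If you would rather stay in Python than shell out to the binaries, the `llama-cpp-python` bindings can pull a GGUF straight from the Hub and run it. A minimal sketch, assuming `pip install llama-cpp-python huggingface_hub`; the repo id and filename again mirror the CLI examples above:

```python
from llama_cpp import Llama

# Download the GGUF from the Hub (cached after the first call) and load it.
llm = Llama.from_pretrained(
    repo_id="RachidAR/Phi-3-mini-4k-instruct-Q6_K-GGUF",
    filename="phi-3-mini-4k-instruct-q6_k.gguf",
    n_ctx=2048,  # match the -c 2048 used for llama-server above
)

# The chat API applies the chat template embedded in the GGUF
# (falling back to a default template if none is present).
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```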