Update README.md #3
by YorkieOH10 - opened
README.md CHANGED
@@ -24,6 +24,8 @@ The model belongs to the Phi-3 family with the Medium version in two variants [4
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up.

+**Warning:** As of 21/05/2024, this model is not working in the latest release of llama.cpp (and it will not work in any wrapper that currently uses it).
+
Resources and Technical Documentation:

+ [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024)
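To illustrate the warning added above, here is a minimal sketch of how a user might check whether their local build can load the model. It assumes the llama-cpp-python wrapper (which bundles llama.cpp) is installed and that a Phi-3-Medium-128K-Instruct GGUF file has already been downloaded; the file name used is a placeholder, not a path from this repository.

```python
# Minimal load check for the compatibility warning above.
# Assumes: `pip install llama-cpp-python` and a locally downloaded GGUF file.
from llama_cpp import Llama

MODEL_PATH = "./Phi-3-medium-128k-instruct-Q4_K_M.gguf"  # placeholder file name

try:
    # Loading is where an unsupported architecture typically fails:
    # llama.cpp rejects the model before any generation happens.
    llm = Llama(model_path=MODEL_PATH, n_ctx=4096)
    out = llm("Hello", max_tokens=16)
    print(out["choices"][0]["text"])
except Exception as exc:
    # With a llama.cpp build that predates Phi-3-Medium support,
    # expect the load step to raise rather than generate text.
    print(f"Model failed to load: {exc}")
```

If the underlying llama.cpp release lacks support for this model, the failure shows up at load time, which is why the warning applies to every wrapper built on top of it.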