---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
  parameters:
    temperature: 0.7
widget:
- messages:
  - role: user
    content: What's the difference between a banana and a strawberry?
---

**Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.1.3

Other EXL2 quants:

| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
| **[2.2](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_2bpw_exl2)** | 1217 MB | 6 |
| **[2.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_5bpw_exl2)** | 1342 MB | 6 |
| **[3.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_0bpw_exl2)** | 1558 MB | 6 |
| **[3.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_5bpw_exl2)** | 1774 MB | 6 |
| **[3.75](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_75bpw_exl2)** | 1882 MB | 6 |
| **[4.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_0bpw_exl2)** | 1990 MB | 6 |
| **[4.25](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_25bpw_exl2)** | 2099 MB | 6 |
| **[5.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-5_0bpw_exl2)** | 2423 MB | 6 |
| **[6.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_0bpw_exl2)** | 2870 MB | 8 |
| **[6.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_5bpw_exl2)** | 3089 MB | 8 |
| **[8.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-8_0bpw_exl2)** | 3620 MB | 8 |

# Phi-3-mini-128k-instruct- ~~abliterated-v3~~ -geminified

Credit to [u/Anduin1357](https://www.reddit.com/user/Anduin1357/) on Reddit for the name, who [wrote this comment](https://www.reddit.com/r/LocalLLaMA/comments/1cmh6ru/comment/l31zkan/).

[My Jupyter "cookbook" to replicate the methodology can be found here; refined library coming soon.](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb)

## What's this?

Well, after my abliterated models, I figured I should cover all the possible ground of such work and introduce a model that acts like their polar opposite. This is the result of that, and I feel it lines up in performance with a certain search engine's AI model series.

## Summary

This is [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on the one described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to understand more.

This model has been orthogonalized to act more like certain rhymes-with-Shmemini models.
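For intuition, the core of weight orthogonalization can be sketched in a few lines of numpy: given a behavior direction found by contrasting activations on two prompt sets, the rank-1 component along that direction is subtracted from a weight matrix so the model can no longer write onto it. This is a minimal toy illustration, not the refined methodology from the cookbook; the shapes and the `orthogonalize` helper are illustrative assumptions.

```python
import numpy as np

def orthogonalize(W: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of W's output along `direction`.

    W: weight matrix of shape (d_model, d_in), writing into the residual stream.
    direction: vector of shape (d_model,), e.g. a behavior direction obtained
    by contrasting mean activations on two sets of prompts (illustrative).
    """
    r = direction / np.linalg.norm(direction)  # unit vector
    # Subtract the rank-1 projection r r^T W; afterwards r^T W' = 0.
    return W - np.outer(r, r) @ W

# Toy check: outputs of the orthogonalized matrix have no component along r.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
r = rng.normal(size=8)
W_ortho = orthogonalize(W, r)
x = rng.normal(size=4)
residual = abs((W_ortho @ x) @ (r / np.linalg.norm(r)))
print(residual)  # numerically ~0
```

In practice this projection is applied in bfloat16 to the attention-output and MLP-output matrices of every layer, which is why the result ships as plain safetensor weights rather than a runtime patch.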