---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE

language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
  parameters:
    temperature: 0.7
widget:
  - messages:
      - role: user
        content: What's the difference between a banana and a strawberry?
---
**ExLlamaV2** quant (**exl2** / **8.0 bpw**), made with ExLlamaV2 v0.1.3

Other EXL2 quants:
| **Quant (bpw)** | **Size (MB)** | **lm_head (bits)** |
| :-----: | :-----: | :-----: |
| **[2.2](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_2bpw_exl2)** | 1217 | 6 |
| **[2.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-2_5bpw_exl2)** | 1342 | 6 |
| **[3.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_0bpw_exl2)** | 1558 | 6 |
| **[3.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_5bpw_exl2)** | 1774 | 6 |
| **[3.75](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-3_75bpw_exl2)** | 1882 | 6 |
| **[4.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_0bpw_exl2)** | 1990 | 6 |
| **[4.25](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-4_25bpw_exl2)** | 2099 | 6 |
| **[5.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-5_0bpw_exl2)** | 2423 | 6 |
| **[6.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_0bpw_exl2)** | 2870 | 8 |
| **[6.5](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-6_5bpw_exl2)** | 3089 | 8 |
| **[8.0](https://huggingface.co/Zoyd/failspy_Phi-3-mini-4k-geminified-8_0bpw_exl2)** | 3620 | 8 |
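
To run one of these quants, download the corresponding repo and load it with the ExLlamaV2 Python API. Below is a minimal sketch against the v0.1.x API; the model directory is a placeholder for wherever you downloaded the quant, and the prompt follows the standard Phi-3 instruct format.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Placeholder path: point at a locally downloaded quant directory
config = ExLlamaV2Config()
config.model_dir = "./Phi-3-mini-4k-geminified-8_0bpw_exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split across available GPUs as needed
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7  # matches the widget default above

# Phi-3 instruct prompt format
prompt = "<|user|>\nWhat's the difference between a banana and a strawberry?<|end|>\n<|assistant|>\n"
print(generator.generate_simple(prompt, settings, 200))
```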


# Phi-3-mini-128k-instruct- ~~abliterated-v3~~ -geminified

Credit to [u/Anduin1357](https://www.reddit.com/user/Anduin1357/) on Reddit, who [suggested the name in this comment](https://www.reddit.com/r/LocalLLaMA/comments/1cmh6ru/comment/l31zkan/).

My Jupyter "cookbook" for replicating the methodology [can be found here](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb); a refined library is coming soon.

## What's this?
Well, after my abliterated models, I figured I should cover the remaining ground of this line of work and introduce a model that acts as their polar opposite. This is the result, and I feel its behavior lines up with a certain search engine's AI model series.

## Summary

This is [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) with orthogonalized bfloat16 safetensors weights, generated with a refined methodology based on the one described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read for more background.

This model has been orthogonalized to act more like certain rhymes-with-Shmemini models.
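
For the curious, the core of the methodology is a rank-one edit to every weight matrix that writes into the residual stream. The sketch below is *not* the cookbook code, just a minimal illustration of that step, assuming a Llama/Phi-style Hugging Face module layout and a precomputed behavioral `direction` tensor (found by contrasting mean activations over two prompt sets; the geminified variant applies the same edit with a differently chosen direction — see the cookbook for the full procedure).

```python
import torch

def orthogonalize(W: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project the component along `direction` out of a weight matrix
    whose output lands in the residual stream.

    W         : (d_model, d_in) weight, output dimension first (HF layout)
    direction : (d_model,) behavioral direction vector
    """
    r = direction / direction.norm()   # ensure unit norm
    return W - torch.outer(r, r @ W)   # W' = (I - r r^T) W

# Hypothetical application to a loaded HF model: edit every matrix
# that writes into the residual stream (attention output projection
# and MLP down-projection in each layer).
for layer in model.model.layers:
    o = layer.self_attn.o_proj.weight
    o.data = orthogonalize(o.data, direction)
    d = layer.mlp.down_proj.weight
    d.data = orthogonalize(d.data, direction)
```

Because the edit is baked into the weights rather than applied as a runtime hook, the resulting safetensors checkpoint loads and quantizes like any ordinary model, which is what makes the EXL2 quants above possible.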