zaq-hack committed
Commit f21dc7d
1 parent: efc81ba

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -5,9 +5,9 @@ tags:
 - nsfw
 - merge
 ---
-*<span style="color:orange">I'm just tinkering. All credit to the original creators: [Noromaid is hot.](https://huggingface.co/NeverSleep)</span>
-*<span style="color:orange">"rpcal" designates that this model was quantized using an [RP-specific data set](https://huggingface.co/datasets/royallab/PIPPA-cleaned) instead of the generalized wiki or llama data set. I have been unable to quantify real differences in the same model "compressed" using these two different methods. It "feels" better, but I can't put my finger on why. My current theory is that it gives "good responses" just as often as a similarly quantized model, however, good responses are "subjectively better" with this method. Any help quantifying this would be appreciated. [Anyone know Ayumi?](https://ayumi.m8geil.de/erp4_chatlogs/?S=erv3_0#!/index)</span>
-*<span style="color:orange">This model: EXL2 @ 3.5 bpw using RP data for calibration.</span>
+* <span style="color:orange">I'm just tinkering. All credit to the original creators: [Noromaid is hot.](https://huggingface.co/NeverSleep)</span>
+* <span style="color:orange">"rpcal" designates that this model was quantized using an [RP-specific data set](https://huggingface.co/datasets/royallab/PIPPA-cleaned) instead of the generalized wiki or llama data set. I have been unable to quantify real differences in the same model "compressed" using these two different methods. It "feels" better, but I can't put my finger on why. My current theory is that it gives "good responses" just as often as a similarly quantized model, however, good responses are "subjectively better" with this method. Any help quantifying this would be appreciated. [Anyone know Ayumi?](https://ayumi.m8geil.de/erp4_chatlogs/?S=erv3_0#!/index)</span>
+* <span style="color:orange">This model: EXL2 @ 3.5 bpw using RP data for calibration.</span>

 ## MiquMaid v3