zaq-hack committed
Commit: b3db526
Parent(s): b3b596a

Update README.md

Files changed (1): README.md (+6, -1)
README.md CHANGED

```diff
@@ -6,9 +6,14 @@ base_model:
 - Undi95/Llama-3-LewdPlay-8B
 library_name: transformers
 tags:
-- mergekit
+- not-for-all-audiences
+- nsfw
 - merge
 ---
+* <span style="color:orange">I'm just tinkering. All credit to the original creator: [Undi](https://huggingface.co/Undi95).</span>
+* <span style="color:orange">"rpcal" designates that this model was quantized using an [RP-specific data set](https://huggingface.co/datasets/royallab/PIPPA-cleaned) instead of the generalized wiki or llama data set. This is likely the last model I will create with this method, as Llama-3-8B seems to get markedly dumber when quantized this way. In previous models it was difficult to tell, but the increased error margin from quantizing Llama-3-8B makes it obvious which method is better. I deleted the lower quants of rpcal because they are noticeably dumber by comparison. This one seems to work fine, and it is the only one I would recommend if you want to compare against the other yourself.</span>
+* <span style="color:orange">This model: EXL2 @ 8.0 bpw using RP data for calibration.</span>
+---

 # LewdPlay-8B
```
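For context, an "EXL2 @ 8.0 bpw" quant calibrated on an RP data set is typically produced with exllamav2's conversion script. The sketch below is a hypothetical invocation, not the author's actual command: all paths and file names are placeholders, and the flags reflect exllamav2's `convert.py` options (`-i` input dir, `-o` working dir, `-cf` compiled output dir, `-b` bits per weight, `-c` calibration parquet) as commonly documented.

```shell
# Hypothetical EXL2 quantization pass calibrated on an RP-specific
# parquet (e.g. a cleaned PIPPA export) instead of the default wikitext.
# -i : source model directory (HF format)
# -o : scratch/working directory for measurement passes
# -cf: directory for the compiled quantized model
# -b : target bits per weight (8.0 here)
# -c : calibration data set in parquet format
python convert.py \
  -i ./Llama-3-LewdPlay-8B \
  -o ./work \
  -cf ./LewdPlay-8B-exl2-rpcal-8.0bpw \
  -b 8.0 \
  -c ./pippa-cleaned.parquet
```

The choice of calibration data matters because EXL2 picks per-layer bit allocations to minimize measured error on that data; calibrating on roleplay text optimizes for that distribution, which is the trade-off the note above describes.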