aashish1904 committed a445855 (parent: 723a76a)
Upload README.md with huggingface_hub
README.md (added)
---
license: llama3
language:
- en
tags:
- roleplay
- llama3
- sillytavern
- idol
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF
This is a quantized version of [aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K), created using llama.cpp.

# Original Model Card

# This is the final Llama 3.0 version; the next iteration will start from Llama 3.1.
# Special Thanks:
- Lewdiculous's superb GGUF version. Thank you for your conscientious and responsible dedication.
  - https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF-IQ-Imatrix-Request
- mradermacher's superb GGUF version. Thank you for your conscientious and responsible dedication.
  - https://huggingface.co/mradermacher/llama3-8B-DarkIdol-2.3-Uncensored-32K-i1-GGUF
  - https://huggingface.co/mradermacher/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF

# These are my own quantizations (updated almost daily).
The difference from normal quantizations is that I quantize the output and embedding tensors to f16,
and the other tensors to q5_k, q6_k, or q8_0.
This produces models that are barely degraded, if at all, and are smaller in size.
They run at about 3-6 t/s on CPU only using llama.cpp, and obviously faster on machines with potent GPUs.
- the fast cat at [ZeroWw/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF](https://huggingface.co/ZeroWw/llama3-8B-DarkIdol-2.2-Uncensored-32K-GGUF)

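
For illustration, here is a minimal sketch of that recipe driven from Python with llama.cpp's `llama-quantize` tool. The binary path and file names are placeholders, and the `--token-embedding-type` / `--output-tensor-type` flags assume a reasonably recent llama.cpp build; this is not the exact command used for these files.

```python
# Hedged sketch of the "f16 embeddings/output, quantized body" recipe described above.
# Assumptions: llama.cpp is built locally and the placeholder paths below exist.
import subprocess

LLAMA_QUANTIZE = "./llama.cpp/llama-quantize"           # placeholder path to the quantize binary
SRC = "llama3-8B-DarkIdol-2.3-Uncensored-32K.f16.gguf"  # placeholder full-precision GGUF

for body_type in ("q5_k", "q6_k", "q8_0"):
    dst = f"llama3-8B-DarkIdol-2.3-Uncensored-32K.{body_type}.gguf"
    subprocess.run(
        [
            LLAMA_QUANTIZE,
            "--token-embedding-type", "f16",  # keep the embedding tensor at f16
            "--output-tensor-type", "f16",    # keep the output tensor at f16
            SRC,
            dst,
            body_type,                        # quantize the remaining tensors
        ],
        check=True,
    )
```
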
# Model Description:
The module combination has been readjusted to better fulfill various roles, and the model has been adapted for mobile phones.
- Saving money (Llama 3)
- Tested in English only.
- Input: models input text only. Output: models generate text and code only.
- Uncensored
- Quick response
- The underlying model used is winglian/Llama-3-8b-64k-PoSE (the theoretical support is 64k, but I have only tested up to 32k. :)
- A scholarly response akin to a thesis. (I tend to write songs extensively, to the point where one song almost becomes as detailed as a thesis. :)
- DarkIdol: roles that you can imagine and those that you cannot.
- Roleplay
- Specialized in various role-playing scenarios
- For more, see the test roles: https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/test
- For more, see the LM Studio presets: https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/config-presets

![image/png](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K/resolve/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.png)

## Virtual Idol Twitter
- https://x.com/aifeifei799

# Questions
- The model's responses are for reference only; please do not fully trust them.

# Stop Strings
```python
# Recommended stop strings for this model
stop = [
    "## Instruction:",
    "### Instruction:",
    "<|end_of_text|>",
    " //:",
    "</s>",
    "<3```",
    "### Note:",
    "### Input:",
    "### Response:",
    "### Emoticons:"
]
```
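
For example, a minimal sketch of wiring these stop strings into the llama-cpp-python bindings is shown below. The bindings, the GGUF filename, and the prompt are assumptions and not part of this card; only the stop strings and the 32K context figure come from it.

```python
# Hedged sketch using llama-cpp-python (an assumption; the card itself lists
# llama.cpp, KoboldCpp, LM Studio, etc.). The model filename is a placeholder.
from llama_cpp import Llama

stop = [
    "## Instruction:", "### Instruction:", "<|end_of_text|>", " //:", "</s>",
    "<3```", "### Note:", "### Input:", "### Response:", "### Emoticons:",
]

llm = Llama(
    model_path="llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf",  # placeholder
    n_ctx=32768,  # the card reports testing up to a 32K context
)

out = llm(
    "### Instruction:\nIntroduce yourself in one sentence.\n### Response:\n",
    max_tokens=256,
    stop=stop,  # cut generation at any of the stop strings above
)
print(out["choices"][0]["text"])
```
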
# Model Use
- KoboldCpp https://github.com/LostRuins/koboldcpp
  - Since KoboldCpp is taking a while to update with the latest llama.cpp commits, I recommend this [fork](https://github.com/Nexesenex/kobold.cpp) if anyone has issues.
- LM Studio https://lmstudio.ai/
  - Please test again using the Default LM Studio Windows preset.
- llama.cpp https://github.com/ggerganov/llama.cpp
- Backyard AI https://backyard.ai/
- Layla: an AI chatbot that runs offline on your device. No internet connection required. No censorship. Complete privacy. Layla Lite: https://www.layla-network.ai/
  - Layla Lite GGUF (llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf): https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf?download=true
- More GGUF files: https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF-IQ-Imatrix-Request (a download sketch follows this list)
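
As a small illustration, the GGUF linked above can also be fetched programmatically with `huggingface_hub`; the repo id and filename below come from the Layla Lite link, while everything else is an assumption.

```python
# Hedged sketch: download one of the linked GGUF files with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K",
    filename="llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf",
)
print("Downloaded to:", path)
```
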
# Characters
- https://character-tavern.com/
- https://characterhub.org/
- https://pygmalion.chat/
- https://aetherroom.club/
- https://backyard.ai/
- Layla AI chatbot
### If you want to use vision functionality:
* You must use the latest version of [KoboldCpp](https://github.com/Nexesenex/kobold.cpp).

### To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16)

* You can load the **mmproj** file by using the corresponding section in the interface:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
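
Outside of the KoboldCpp interface shown above, the same idea can be sketched with the llama-cpp-python bindings. Treat this as a hedged illustration only: the handler class, the file names, and the assumption that the generic LLaVA-style handler accepts this Llama-3 **mmproj** are not confirmed by this card.

```python
# Hedged sketch: load a GGUF model together with an mmproj file via llama-cpp-python.
# Assumptions: the bindings are installed, the LLaVA-style chat handler works with
# this Llama-3 mmproj, and the placeholder file paths exist locally.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="llama-3-mmproj-model-f16.gguf")  # placeholder
llm = Llama(
    model_path="llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf",  # placeholder
    chat_handler=chat_handler,
    n_ctx=8192,  # leave room for the image embedding
)

resp = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }]
)
print(resp["choices"][0]["message"]["content"])
```
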
### Thank you:
To the authors for their hard work, which has given me more options to easily create what I want. Thank you for your efforts.
- Hastagaras
- Gryphe
- cgato
- ChaoticNeutrals
- mergekit
- merge
- transformers
- llama
- Nitral-AI
- MLP-KTLim
- rinna
- hfl
- Rupesh2
- stephenlzc
- theprint
- Sao10K
- turboderp
- TheBossLevel123
- winglian
- .........

---
# llama3-8B-DarkIdol-2.3-Uncensored-32K

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with ./llama3-8B-DarkIdol-2.3b as the base.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sao10K/L3-8B-Niitama-v1
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
  - model: turboderp/llama3-turbcat-instruct-8b
  - model: winglian/Llama-3-8b-64k-PoSE
merge_method: model_stock
base_model: winglian/Llama-3-8b-64k-PoSE
dtype: bfloat16

models:
  - model: maldv/badger-writer-llama-3-8b
  - model: underwoods/writer-8b
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
  - model: vicgalle/Roleplay-Llama-3-8B
  - model: cgato/TheSalt-RP-L3-8b-DPO-v0.3.2-e0.15.2
  - model: ./llama3-8B-DarkIdol-2.3a
merge_method: model_stock
base_model: ./llama3-8B-DarkIdol-2.3a
dtype: bfloat16

models:
  - model: Rupesh2/Meta-Llama-3-8B-abliterated
  - model: Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1
  - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
  - model: theprint/Llama-3-8B-Lexi-Smaug-Uncensored
  - model: vicgalle/Unsafe-Llama-3-8B
  - model: vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B
  - model: ./llama3-8B-DarkIdol-2.3b
merge_method: model_stock
base_model: ./llama3-8B-DarkIdol-2.3b
dtype: bfloat16
```

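
For context, here is a hedged sketch of how one of the configurations above could be run with mergekit's `mergekit-yaml` command-line tool from Python. The entry point is mergekit's documented CLI, but the file names, the output directory, and the idea of running the three configurations as separate passes are assumptions, not the author's exact procedure.

```python
# Hedged sketch: write the first Model Stock configuration to disk and run it with
# mergekit's CLI. Assumes `pip install mergekit` and enough disk space for the models.
import subprocess
import textwrap

config = textwrap.dedent("""\
    models:
      - model: Sao10K/L3-8B-Niitama-v1
      - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
      - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
      - model: turboderp/llama3-turbcat-instruct-8b
      - model: winglian/Llama-3-8b-64k-PoSE
    merge_method: model_stock
    base_model: winglian/Llama-3-8b-64k-PoSE
    dtype: bfloat16
    """)

with open("darkidol-2.3-step1.yml", "w") as f:  # placeholder config filename
    f.write(config)

# mergekit-yaml <config.yml> <output-directory>
subprocess.run(
    ["mergekit-yaml", "darkidol-2.3-step1.yml", "./llama3-8B-DarkIdol-2.3-step1"],
    check=True,
)
```

The later passes in the card feed the intermediate merges (./llama3-8B-DarkIdol-2.3a and ./llama3-8B-DarkIdol-2.3b) back in as the next base_model, so they would be run the same way with their own configuration files.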