---
license: llama3
language:
- en
- ja
- zh
tags:
  - roleplay
  - llama3
  - sillytavern
  - idol
---
# Special Thanks:
 - Lewdiculous, for the superb GGUF version; thank you for your conscientious and responsible dedication.
 - https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.1-GGUF-IQ-Imatrix-Request

# Model Description:
The model combination has been readjusted to better fulfill various roles, and the model has been adapted for mobile phones.
- DarkIdol: roles that you can imagine and those that you cannot imagine.
- Roleplay
- Specialized in various role-playing scenarios
- For more, see the test script. (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.0/resolve/main/DarkIdol_test_openai_api_lmstudio.py?download=true)

![image/png](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.1/resolve/main/2024-06-20_20-01-51_9319.png)

# Change Log
### 2024-06-20
- Based on the underlying model Meta-Llama-3-8B-Instruct.
- Integrates the numerous models I previously created; see base_model.

# Stop Strings
```python
stop = [
    "## Instruction:",
    "### Instruction:",
    "<|end_of_text|>",
    "  //:",
    "</s>",
    "<3```"
]
```
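
The stop list above is meant to be passed to whatever client drives the model; judging by its name, the linked `DarkIdol_test_openai_api_lmstudio.py` script does this through an OpenAI-compatible endpoint. Below is a minimal sketch, assuming a local server (for example LM Studio or a koboldcpp build) exposes such an endpoint; the `base_url`, `api_key`, and model id are placeholders for your own setup, not values taken from this repo.

```python
# Minimal sketch: chat with a local OpenAI-compatible server using the stop strings above.
# The base_url, api_key, and model name are placeholders; adjust them for your server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

stop = [
    "## Instruction:",
    "### Instruction:",
    "<|end_of_text|>",
    "  //:",
    "</s>",
    "<3```"
]

response = client.chat.completions.create(
    model="llama3-8B-DarkIdol-1.1",  # placeholder: use the model id your server exposes
    messages=[
        {"role": "system", "content": "You are DarkIdol, a roleplay assistant."},
        {"role": "user", "content": "Introduce your character in two sentences."},
    ],
    stop=stop,        # note: some servers cap the number of stop sequences
    max_tokens=256,
)
print(response.choices[0].message.content)
```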
# Model Use
- Koboldcpp https://github.com/LostRuins/koboldcpp
- Since KoboldCpp can take a while to catch up with the latest llama.cpp commits, I recommend this [fork](https://github.com/Nexesenex/kobold.cpp) if you run into issues.
- LM Studio https://lmstudio.ai/
- llama.cpp https://github.com/ggerganov/llama.cpp (a minimal loading sketch follows this list)
- Backyard AI https://backyard.ai/
- Meet Layla, an AI chatbot that runs offline on your device. No internet connection required, no censorship, complete privacy. Layla Lite: https://www.layla-network.ai/
- Layla Lite llama3-8B-DarkIdol-1.1-Q4_K_S-imat.gguf https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.1-GGUF-IQ-Imatrix-Request/blob/main/llama3-8B-DarkIdol-1.1-Q4_K_S-imat.gguf?download=true
- more gguf at https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-1.1-GGUF-IQ-Imatrix-Request
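
For the GGUF builds linked above, one scriptable option alongside the GUI tools is llama-cpp-python. The following is a minimal sketch, assuming you have downloaded `llama3-8B-DarkIdol-1.1-Q4_K_S-imat.gguf` locally; the path, context size, and generation settings are illustrative, not values prescribed by this repo.

```python
# Minimal llama-cpp-python sketch (pip install llama-cpp-python).
# The model path and settings below are placeholders; adjust them for your machine.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama3-8B-DarkIdol-1.1-Q4_K_S-imat.gguf",  # local path to the GGUF file
    n_ctx=8192,        # context window; lower this on memory-constrained devices
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU only
    chat_format="llama-3",
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are DarkIdol, a roleplay assistant."},
        {"role": "user", "content": "Describe your idol persona."},
    ],
    max_tokens=256,
    stop=["<|end_of_text|>", "</s>"],  # subset of the stop strings listed earlier
)
print(out["choices"][0]["message"]["content"])
```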
# Character
- https://character-tavern.com/
- https://characterhub.org/
- https://pygmalion.chat/
- https://aetherroom.club/
- https://backyard.ai/
- Layla AI chatbot

### If you want to use vision functionality:
 * You must use the latest version of [Koboldcpp](https://github.com/Nexesenex/kobold.cpp).
 
### To use the multimodal **vision** capabilities of this model, you need to load the specified **mmproj** file, which can be found inside this model repo: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16). A programmatic alternative is sketched after the screenshot below.
 
 * You can load the **mmproj** by using the corresponding section in the interface:
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
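
If you prefer scripting over a GUI, the mmproj file can also be loaded programmatically. A minimal sketch with llama-cpp-python is shown below; the file names are placeholders, and pairing this particular mmproj with the generic `Llava15ChatHandler` is an assumption on my part, not something documented by this repo (the screenshot above shows the supported GUI route).

```python
# Sketch only: multimodal loading via llama-cpp-python. File names are placeholders,
# and using Llava15ChatHandler with this mmproj is an assumption, not a documented setup.
import base64
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

def image_to_data_uri(path: str) -> str:
    # Encode a local image as a base64 data URI that the chat handler accepts.
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

chat_handler = Llava15ChatHandler(clip_model_path="./mmproj-model-f16.gguf")  # placeholder
llm = Llama(
    model_path="./llama3-8B-DarkIdol-1.1-Q4_K_S-imat.gguf",  # placeholder
    chat_handler=chat_handler,
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_to_data_uri("photo.png")}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```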


### Thank you:
 To the authors for their hard work, which has given me more options to easily create what I want. Thank you for your efforts.
- Hastagaras
- Gryphe
- cgato
- ChaoticNeutrals
- mergekit
- merge
- transformers
- llama
- Nitral-AI
- .........

---
base_model:
- cgato/L3-TheSpice-8b-v0.8.3
- aifeifei798/llama3-8B-feifei-1.0
- aifeifei798/Meta-Llama-3-8B-Instruct
- Nitral-AI/Hathor_RP-v.01-L3-8B
- aifeifei798/llama3-8B-aifeifei-1.2
- aifeifei798/llama3-8B-aifeifei-1.3
- aifeifei798/llama3-8B-DarkIdol-1.0
- aifeifei798/llama3-8B-aifeifei-1.0
- aifeifei798/llama3-8B-aifeifei-1.1
library_name: transformers
tags:
- mergekit
- merge

---
# llama3-8B-DarkIdol-1.1

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [aifeifei798/Meta-Llama-3-8B-Instruct](https://huggingface.co/aifeifei798/Meta-Llama-3-8B-Instruct) as a base.

### Models Merged

The following models were included in the merge:
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
* [aifeifei798/llama3-8B-feifei-1.0](https://huggingface.co/aifeifei798/llama3-8B-feifei-1.0)
* [Nitral-AI/Hathor_RP-v.01-L3-8B](https://huggingface.co/Nitral-AI/Hathor_RP-v.01-L3-8B)
* [aifeifei798/llama3-8B-aifeifei-1.2](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.2)
* [aifeifei798/llama3-8B-aifeifei-1.3](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.3)
* [aifeifei798/llama3-8B-DarkIdol-1.0](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.0)
* [aifeifei798/llama3-8B-aifeifei-1.0](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.0)
* [aifeifei798/llama3-8B-aifeifei-1.1](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: cgato/L3-TheSpice-8b-v0.8.3
  - model: Nitral-AI/Hathor_RP-v.01-L3-8B
  - model: aifeifei798/llama3-8B-feifei-1.0
  - model: aifeifei798/llama3-8B-aifeifei-1.0
  - model: aifeifei798/llama3-8B-aifeifei-1.1
  - model: aifeifei798/llama3-8B-aifeifei-1.2
  - model: aifeifei798/llama3-8B-aifeifei-1.3
  - model: aifeifei798/llama3-8B-DarkIdol-1.0
merge_method: model_stock
base_model: aifeifei798/Meta-Llama-3-8B-Instruct
dtype: bfloat16

```
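
To reproduce a merge of this shape (not necessarily this exact model), the configuration above can be written to a file and handed to mergekit's `mergekit-yaml` command-line entry point. The sketch below assumes mergekit is installed (`pip install mergekit`); the config and output directory names are illustrative.

```python
# Sketch: save the merge configuration above to disk and run mergekit on it.
# Assumes `pip install mergekit` has put the mergekit-yaml entry point on PATH.
import subprocess
from pathlib import Path

config = """\
models:
  - model: cgato/L3-TheSpice-8b-v0.8.3
  - model: Nitral-AI/Hathor_RP-v.01-L3-8B
  - model: aifeifei798/llama3-8B-feifei-1.0
  - model: aifeifei798/llama3-8B-aifeifei-1.0
  - model: aifeifei798/llama3-8B-aifeifei-1.1
  - model: aifeifei798/llama3-8B-aifeifei-1.2
  - model: aifeifei798/llama3-8B-aifeifei-1.3
  - model: aifeifei798/llama3-8B-DarkIdol-1.0
merge_method: model_stock
base_model: aifeifei798/Meta-Llama-3-8B-Instruct
dtype: bfloat16
"""

Path("darkidol-1.1.yaml").write_text(config)
subprocess.run(
    ["mergekit-yaml", "darkidol-1.1.yaml", "./llama3-8B-DarkIdol-1.1"],  # output dir is illustrative
    check=True,
)
```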