---
license: llama3
license_name: llama3
license_link: LICENSE
library_name: transformers
tags:
- not-for-all-audiences
- mergekit
- llama-cpp
- gguf-my-repo
datasets:
- crestf411/LimaRP-DS
- Gryphe/Sonnet3.5-Charcard-Roleplay
- anthracite-org/c2_logs_32k_mistral-v3_v1.2_no_system
- anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system
- anthracite-org/kalo-opus-instruct-3k-filtered-no-system
- anthracite-org/nopm_claude_writing_fixed
base_model: crestf411/L3.1-8B-Slush-v1.1
---
# Triangle104/L3.1-8B-Slush-v1.1-Q6_K-GGUF
This model was converted to GGUF format from [`crestf411/L3.1-8B-Slush-v1.1`](https://huggingface.co/crestf411/L3.1-8B-Slush-v1.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/crestf411/L3.1-8B-Slush-v1.1) for more details on the model.
---
Model details:
-
Slush is a two-stage model trained with high LoRA dropout. Stage 1 is a continued pretraining run on the base model, aimed at boosting the model's creativity and writing capabilities; the resulting LoRA is merged into the instruction-tuned model. Stage 2 is a fine-tuning step on top of this, intended to further enhance its roleplaying capabilities and/or to repair any damage caused by the stage 1 merge.
This is an initial experiment done on the at-this-point-infamous Llama 3.1 8B model, in an attempt to retain its smartness while addressing its abysmal lack of imagination/creativity. As always, feedback is welcome, and begone if you demand perfection.
The second stage, like the Sunfall series, follows the SillyTavern preset, so your mileage may vary, particularly if you use a different tool and/or preset.
This update (v1.1) addresses some of the feedback from the first iteration by ramping down the training parameters, and also introduces a custom merge using mergekit.
Parameter suggestions:
-
I did all my testing with temp 1, min-p 0.1, DRY 0.8. I enabled XTC at higher contexts.
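For concreteness, here is a rough sketch of how those settings map onto llama.cpp's sampler flags. This assumes a reasonably recent llama.cpp build that includes the DRY and XTC samplers; interpreting "DRY 0.8" as the DRY multiplier and the specific XTC threshold/probability values below are my assumptions, not part of the original card.
```bash
# Sketch only: sampler flags for a recent llama.cpp build (DRY/XTC support assumed).
# "DRY 0.8" is interpreted here as the DRY multiplier; the XTC values are illustrative.
llama-cli --hf-repo Triangle104/L3.1-8B-Slush-v1.1-Q6_K-GGUF \
  --hf-file l3.1-8b-slush-v1.1-q6_k.gguf \
  --temp 1.0 --min-p 0.1 \
  --dry-multiplier 0.8 \
  --xtc-threshold 0.1 --xtc-probability 0.5 \
  -p "The meaning to life and the universe is"
```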
Training details:
-
**Stage 1 (continued pretraining)**
- Target: meta-llama/Llama-3.1-8B (resulting LoRA merged into meta-llama/Llama-3.1-8B-Instruct)
- LoRA dropout 0.5 (motivation)
- LoRA rank 64, alpha 128 (motivation)
- LR cosine 4e-6
- LoRA+ with LR Ratio: 15
- Context size: 16384
- Gradient accumulation steps: 4
- Epochs: 1

**Stage 2 (fine-tune)**
- Target: Stage 1 model
- LoRA dropout 0.5
- LoRA rank 32, alpha 64
- LR cosine 5e-6 (min 5e-7)
- LoRA+ with LR Ratio: 15
- Context size: 16384
- Gradient accumulation steps: 4
- Epochs: 2
Merge Method
-
This model was merged using the TIES merge method, with meta-llama/Llama-3.1-8B as the base.
Configuration
-
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: stage1-on-instruct
    parameters:
      weight: 1.5
      density: 1
  - model: stage2-on-stage1
    parameters:
      weight: 1.5
      density: 1
  - model: meta-llama/Llama-3.1-8B-Instruct
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: meta-llama/Llama-3.1-8B
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
tokenizer_source: meta-llama/Llama-3.1-8B-Instruct
dtype: bfloat16
```
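For reference, a mergekit configuration like the one above is typically applied with the `mergekit-yaml` command; the sketch below assumes mergekit is installed, and the config filename and output path are placeholders rather than the author's actual paths:
```bash
# Illustrative only: apply a mergekit YAML config (file/output paths are placeholders).
pip install mergekit
mergekit-yaml slush-merge.yaml ./L3.1-8B-Slush-merged --cuda
```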
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/L3.1-8B-Slush-v1.1-Q6_K-GGUF --hf-file l3.1-8b-slush-v1.1-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/L3.1-8B-Slush-v1.1-Q6_K-GGUF --hf-file l3.1-8b-slush-v1.1-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/L3.1-8B-Slush-v1.1-Q6_K-GGUF --hf-file l3.1-8b-slush-v1.1-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/L3.1-8B-Slush-v1.1-Q6_K-GGUF --hf-file l3.1-8b-slush-v1.1-q6_k.gguf -c 2048
```
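If you prefer to download the quantized file first and run against a local copy, here is a minimal sketch using the Hugging Face CLI; the local directory is an illustrative placeholder:
```bash
# Sketch: pre-download the quantized file, then point the binary at the local copy.
pip install -U "huggingface_hub[cli]"
huggingface-cli download Triangle104/L3.1-8B-Slush-v1.1-Q6_K-GGUF \
  l3.1-8b-slush-v1.1-q6_k.gguf --local-dir ./models
./llama-cli -m ./models/l3.1-8b-slush-v1.1-q6_k.gguf \
  -p "The meaning to life and the universe is"
```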