---
license: cc-by-nc-4.0
language:
- en
---
```
  e88 88e                               d8     
 d888 888b  8888 8888  ,"Y88b 888 8e   d88     
C8888 8888D 8888 8888 "8" 888 888 88b d88888   
 Y888 888P  Y888 888P ,ee 888 888 888  888     
  "88 88"    "88 88"  "88 888 888 888  888     
      b                                        
      8b,                                      
 
  e88'Y88                  d8           888    
 d888  'Y  ,"Y88b 888,8,  d88    ,e e,  888    
C8888     "8" 888 888 "  d88888 d88 88b 888    
 Y888  ,d ,ee 888 888     888   888   , 888    
  "88,d88 "88 888 888     888    "YeeP" 888    
                                               
PROUDLY PRESENTS         
```
# L3-70B-Euryale-v2.1-exl2-rpcal

Quantized using 200 samples of 8192 tokens from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset.
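
For reference, exl2 quants like these are produced with ExLlamaV2's `convert.py`. A minimal sketch of how a branch such as `6b6h` might be made (not the exact command used here; paths and the parquet file name are placeholders, and flag names follow the ExLlamaV2 repo's conversion script):

```python
# Hypothetical invocation of ExLlamaV2's convert.py; all paths are placeholders.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "/models/L3-70B-Euryale-v2.1",   # FP16 source model
        "-o", "/tmp/exl2-work",                # scratch/working directory
        "-cf", "/models/Euryale-exl2-6b6h",    # compiled quant output
        "-c", "pippa_cleaned.parquet",         # RP-oriented calibration dataset
        "-r", "200",                           # 200 calibration rows...
        "-l", "8192",                          # ...of 8192 tokens each
        "-b", "6.0",                           # target bits per weight
        "-hb", "6",                            # 6-bit lm_head
    ],
    check=True,
)
```

The `measurement.json` in `main` is the output of the measurement pass; passing it back in with `-m` lets additional bitrates be quantized without re-measuring.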

Branches:
- `main` -- `measurement.json`
- `8b8h` -- 8bpw, 8bit lm_head
- `6b6h` -- 6bpw, 6bit lm_head
- `4.65b6h` -- 4.65bpw, 6bit lm_head
- `4.5b6h` -- 4.5bpw, 6bit lm_head
- `3.75b6h` -- 3.75bpw, 6bit lm_head
- `3.5b6h` -- 3.5bpw, 6bit lm_head
- `2.25b6h` -- 2.25bpw, 6bit lm_head
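
To fetch a single quant without pulling every branch, something like the following works with `huggingface_hub` (a sketch; the repo id is left as a placeholder and the local path is just an example):

```python
from huggingface_hub import snapshot_download

# Fetch only the 6bpw quant; `revision` selects one of the branches above.
snapshot_download(
    repo_id="<this-repo-id>",                     # e.g. <user>/L3-70B-Euryale-v2.1-exl2-rpcal
    revision="6b6h",
    local_dir="models/L3-70B-Euryale-v2.1-6b6h",  # example destination
)
```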

Original model link: [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)

Original model README below.

-----

![Euryale](https://images7.alphacoders.com/921/921311.jpg)

**She's back!**

Stheno's Sister Model, designed to impress.

```
- Same dataset as Stheno v3.2 -> see the notes there.
- LoRA fine-tune -> full fine-tuning (FFT) is simply too expensive.
- Trained on 8x H100 SXMs, then trained some more afterwards.
```

**Testing Notes**
```
- Better prompt adherence.
- Better anatomy / spatial awareness.
- Adapts much better to unique and custom formatting / reply formats.
- Very creative, lots of unique swipes.
- Is not restrictive during roleplays. 
- Feels like a big brained version of Stheno.
```

*Likely due to it being a 70B model instead of an 8B. Similar vibes to Llama 2, where 70B models were simply much more 'aware' of the subtler areas and contexts that a smaller 7B or 13B model could not handle.*

---

**Recommended Sampler Settings**:
```
Temperature - 1.17
min_p - 0.075
Repetition Penalty - 1.10
```
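
If you drive the model through ExLlamaV2's Python API rather than a frontend, these values map onto `ExLlamaV2Sampler.Settings` roughly as follows (a sketch, assuming a current exllamav2 release):

```python
from exllamav2.generator import ExLlamaV2Sampler

# Recommended sampler values expressed as ExLlamaV2 sampler settings.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.17
settings.min_p = 0.075
settings.token_repetition_penalty = 1.10  # "Repetition Penalty" above
```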

**SillyTavern Instruct Settings**:
<br>Context Template: Llama-3-Instruct-Names
<br>Instruct Presets: [Euryale-v2.1-Llama-3-Instruct](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1/blob/main/Euryale-v2.1-Llama-3-Instruct.json)
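
The preset JSON can also be fetched directly from the original repo, e.g. (a short sketch using `hf_hub_download`; repo id and filename come from the link above):

```python
from huggingface_hub import hf_hub_download

# Grab the instruct preset JSON straight from the original model repo.
path = hf_hub_download(
    repo_id="Sao10K/L3-70B-Euryale-v2.1",
    filename="Euryale-v2.1-Llama-3-Instruct.json",
)
print(path)  # import this file in SillyTavern's instruct settings
```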

---

As per usual, support me here:

Ko-fi: https://ko-fi.com/sao10k

```
Art by wada_kazu / わだかず (pixiv page private?)
```