---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
---
```
  e88 88e                               d8     
 d888 888b  8888 8888  ,"Y88b 888 8e   d88     
C8888 8888D 8888 8888 "8" 888 888 88b d88888   
 Y888 888P  Y888 888P ,ee 888 888 888  888     
  "88 88"    "88 88"  "88 888 888 888  888     
      b                                        
      8b,                                      
 
  e88'Y88                  d8           888    
 d888  'Y  ,"Y88b 888,8,  d88    ,e e,  888    
C8888     "8" 888 888 "  d88888 d88 88b 888    
 Y888  ,d ,ee 888 888     888   888   , 888    
  "88,d88 "88 888 888     888    "YeeP" 888    
                                               
PROUDLY PRESENTS         
```
# Llama-3-TenyxChat-DaybreakStorywriter-70B-exl2-rpcal

Quantized using 200 samples of 8192 tokens from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset.
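
For reference, these settings correspond to exllamav2's `convert.py` calibration flags. A minimal sketch of such a quantization run (local paths and the parquet filename are assumptions, not artifacts of this repo):

```sh
# Quantize with RP-oriented calibration: 200 rows of 8192 tokens each.
python convert.py \
    -i /models/Llama-3-TenyxChat-DaybreakStorywriter-70B \
    -o /tmp/exl2-work \
    -cf /models/exl2-out-4.65b6h \
    -c pippa-cleaned.parquet \
    -l 8192 -r 200 \
    -b 4.65 -hb 6
```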

Branches (a download sketch follows the list):
- `main` -- `measurement.json`
- `6b8h` -- 6bpw, 8bit lm_head
- `4.65b6h` -- 4.65bpw, 6bit lm_head
- `2.25b6h` -- 2.25bpw, 6bit lm_head
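
Each quant sits on its own branch, so you only need to pull the revision you want. A sketch using `huggingface-cli` (replace `<user>` with the namespace hosting this repo; the repo name is taken from the title above):

```sh
# Download only the 4.65bpw quant into a local folder.
huggingface-cli download <user>/Llama-3-TenyxChat-DaybreakStorywriter-70B-exl2-rpcal \
    --revision 4.65b6h \
    --local-dir ./Llama-3-TenyxChat-DaybreakStorywriter-70B-4.65b6h
```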

Original model link: [Envoid/Llama-3-TenyxChat-DaybreakStorywriter-70B](https://huggingface.co/Envoid/Llama-3-TenyxChat-DaybreakStorywriter-70B)

### Quanter's notes
Since the default calibration dataset is apparently supposed to be better in nearly all situations, I've started quanting with it in addition to my standard rpcal fare. I'd appreciate real-world tests to confirm that hypothesis, though, so please leave a comment if you find rpcal to be better than what I've dubbed 'longcal'.
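
Assuming 'longcal' refers to quants calibrated on exllamav2's built-in default dataset (what `convert.py` uses when no `-c` dataset is given), the only change from the rpcal sketch above is dropping that flag:

```sh
# 'longcal'-style quant: no -c, so convert.py falls back to its
# built-in default calibration data instead of the PIPPA parquet.
python convert.py \
    -i /models/Llama-3-TenyxChat-DaybreakStorywriter-70B \
    -o /tmp/exl2-work \
    -cf /models/exl2-out-4.65b6h-longcal \
    -b 4.65 -hb 6
```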


Original model README below.

-----

## Caution: This model is capable of producing adult content. 

This model is a 50/50 SLERP merge between [crestf411/L3-70B-daybreak-storywriter-v0.4](https://huggingface.co/crestf411/L3-70B-daybreak-storywriter-v0.4) and [tenyx/Llama3-TenyxChat-70B](https://huggingface.co/tenyx/Llama3-TenyxChat-70B).
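
The merge recipe isn't published here, but a 50/50 SLERP is commonly produced with mergekit; a minimal sketch of such a config (layer range, base-model choice, and dtype are assumptions):

```yaml
# mergekit SLERP config: t = 0.5 interpolates the two models 50/50.
merge_method: slerp
base_model: tenyx/Llama3-TenyxChat-70B
slices:
  - sources:
      - model: tenyx/Llama3-TenyxChat-70B
        layer_range: [0, 80]   # Llama-3 70B has 80 decoder layers
      - model: crestf411/L3-70B-daybreak-storywriter-v0.4
        layer_range: [0, 80]
parameters:
  t: 0.5
dtype: bfloat16
```

Such a config would be run with `mergekit-yaml config.yml ./merged-model`.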

The resulting model scores significantly higher on the super top secret, private **NALA** evaluation *(Neural-linguistic Assessment of Lifelike Approximation)*<sup>[1]</sup>, making it a great choice for novelty RP scenarios:

- **TenyxChat-DaybreakStorywriter: 76.52**
- DeepSeek-Coder-V2-Instruct: 68.20
- TenyxChat: 57.89

This model utilizes the Llama-3-Instruct prompt format. 
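
For reference, that format wraps each turn in header tokens:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```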


<sup>1. The NALA evaluation is not a proper scientific evaluation and should not be used to inform any decisions related to personal safety, personal enjoyment, or any other critical or non-critical matter. NALA score is entirely arbitrary and subject to change without notice.</sup>