---
tags:
- merge
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64bb1109aaccfd28b023bcec/X0i6-KleZPdNqD1qFq3tK.png)

# DaringLotus-10.7B-v2

This is a dare ties merge of https://huggingface.co/BlueNipples/SnowLotus-v2-10.7B and its parent models. It shares SnowLotus's good prose and relatively decent coherency, leaning a little more toward prose and a little less toward coherency. I like this model for generating great prose if I feel like regenning a bit. Like SnowLotus, it's a good model for RP, and I think both of these merges probably stand up with the best in their weight class (11-13B). Which you prefer may come down to context and personal preference, which is why I've uploaded both. Credit to Nyx and Sao10k for their model contributions (Frostmaid, FrostWind and SolarDoc), as well as Undi95 and Ikari for Noromaid, the developers of Mergekit, and whoever contributed the medical model used in the frankenmerge portion.
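For anyone curious what the dare ties method actually does to each donor model, here's a minimal sketch of the DARE step (my own illustration, not mergekit's actual code; the function and tensor names are hypothetical):

```python
import torch

def dare_delta(finetuned: torch.Tensor, base: torch.Tensor,
               density: float, weight: float) -> torch.Tensor:
    """DARE sketch: randomly Drop delta parameters And REscale the
    survivors so the expected update stays the same."""
    delta = finetuned - base
    mask = (torch.rand_like(delta) < density).to(delta.dtype)  # keep ~density of deltas
    return weight * delta * mask / density  # rescale to preserve expectation

# dare_ties then resolves sign conflicts between donors (the TIES step)
# and adds the combined, rescaled deltas back onto the base weights.
```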

GGUF (small selection of imatrix and regular k-quants): https://huggingface.co/BlueNipples/DaringLotus-SnowLotus-10.7b-IQ-GGUF

EXL2s:
- https://huggingface.co/zaq-hack/DaringLotus-v2-10.7b-bpw500-h6-exl2
- https://huggingface.co/lucyknada/DaringLotus-v2-10.7B-3bpw-exl2
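A minimal sketch of loading one of the GGUF quants with llama-cpp-python (the quant filename below is an assumption; check the GGUF repo's file list for the exact names):

```python
from llama_cpp import Llama

# Hypothetical filename: substitute an actual quant from the GGUF repo above.
llm = Llama(
    model_path="DaringLotus-10.7b-v2.Q4_K_M.gguf",
    n_ctx=4096,  # Solar's native context; see Format Notes below
)
```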

### Format Notes

Solar is designed for 4k context, but Nyx reports that his merge works to 8k. Given this has a slerp gradient back into that merge, I'm not sure which applies here. Use Alpaca instruct formatting.
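For reference, the standard Alpaca instruct layout looks like this (the system preamble below is the common default; adjust to taste):

```python
# Standard Alpaca prompt layout; the preamble is the usual default text.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)
prompt = ALPACA_TEMPLATE.format(instruction="Write a scene set in a frozen lotus garden.")
```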

## Recipe

This is the models section of the mergekit recipe (the full config would also name `merge_method: dare_ties` and a base model, which aren't shown here):

```yaml
models:
  - model: ./Frostmaid
    parameters:
      density: [0.45] # density gradient
      weight: 0.23
  - model: ./FrostMed
    parameters:
      density: [0.35] # density gradient
      weight: 0.18
  - model: ./SnowLotus-10.7B-v2
    parameters:
      density: [1] # density gradient
      weight: 1
```

### Ayumi Index

http://ayumi.m8geil.de/erp4_chatlogs/?S=rma_0#!/index

In the Ayumi ERPv4 Chat Log Index, SnowLotus scores 94.10 in Flesch, which means it produces more complex sentences than Daring (quite complex ones); DaringLotus scores higher in Var and Ad[jv], which means it makes heavier use of adjectives and adverbs (it's more descriptive). Notably, Daring is in the top 8 for adjectives in a sentence, the highest in its weight class if you discount the Chinese model, and in general both models did very well on this metric (SnowLotus ranks higher here than anything above it in IQ4), showcasing their descriptive ability.

SnowLotus beats DaringLotus on IQ4 with a score of 70.94, only beaten by SOLAR Instruct and Fimbulvetr in its weight class (though also, notably, by Kunoichi 7b by a slim margin); DaringLotus is a bit lower at 65.37 - not as smart.

Interestingly, the benchmarking here showed repetition for both models (which I haven't seen in my own use), and more of it with SnowLotus - so it's possible Daring repeats less than SnowLotus. These results roughly confirm my impressions of the differences, although they potentially reveal some new details too. I've had a great experience RPing with these models, but be sure to use MinP or DynaTemp rather than the older samplers, and be prepared to regen anything they get stuck on!
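If you're using the llama-cpp-python setup sketched above, MinP sampling looks something like this (the values are just a starting point, not a tested recommendation):

```python
# MinP sampling with llama-cpp-python; tune values to taste.
output = llm(
    prompt,
    max_tokens=256,
    temperature=1.0,
    top_p=1.0,   # disable top-p so MinP does the filtering
    min_p=0.1,   # drop tokens under 10% of the top token's probability
)
print(output["choices"][0]["text"])
```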