---
tags:
- merge
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64bb1109aaccfd28b023bcec/X0i6-KleZPdNqD1qFq3tK.png)
# DaringLotus-10.7B-v2
This is a dare ties merge of https://huggingface.co/BlueNipples/SnowLotus-v2-10.7B and its parent models. It shares that model's good prose and relatively decent coherency, leaning a little more toward prose and a little less toward coherency. I like this model for generating great prose if I don't mind regenerating a bit. Like its sibling, it's a good model for RP, and I think both of these merges stand up with the best in their weight class (11-13B). Which you prefer may come down to context and preference, which is why I've uploaded both. Credit to Nyx and Sao10k for their model contributions (Frostmaid, FrostWind and SolarDoc), as well as Undi95 and Ikari for Noromaid, the developers of Mergekit, and whoever contributed the medical model used in the frankenmerge portion.
GGUF (Small selection of Imatrix and regular k-quants): https://huggingface.co/BlueNipples/DaringLotus-SnowLotus-10.7b-IQ-GGUF
EXL2s:
- https://huggingface.co/zaq-hack/DaringLotus-v2-10.7b-bpw500-h6-exl2
- https://huggingface.co/lucyknada/DaringLotus-v2-10.7B-3bpw-exl2
### Format Notes
Solar is designed for 4k context, but Nyx reports that his merge works to 8k. Given this model has a SLERP gradient back into that merge, I'm not sure which limit applies here. Use Alpaca instruct formatting.
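For reference, here is a minimal prompt sketch using the standard Alpaca instruct template; the card doesn't spell out the exact wording, so treat the preamble line as an assumption rather than a required system prompt:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{your prompt here}

### Response:
```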
## Recipe
- model: ./Frostmaid
  parameters:
    density: [0.45] # density gradient
    weight: 0.23
- model: ./FrostMed
  parameters:
    density: [0.35] # density gradient
    weight: 0.18
- model: ./SnowLotus-10.7B-v2
  parameters:
    density: [1] # density gradient
    weight: 1
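
If you want to reproduce the merge, below is a sketch of how the recipe above might look as a complete mergekit config file. The model list, densities and weights come straight from the recipe; `merge_method: dare_ties` follows the dare ties description earlier in the card, while `base_model` and `dtype` are my assumptions and may differ from the exact settings used:

```yaml
# Sketch of a complete mergekit config reconstructed from the recipe above.
# merge_method matches the "dare ties" description in this card;
# base_model and dtype are assumptions, not confirmed settings.
models:
  - model: ./Frostmaid
    parameters:
      density: [0.45] # density gradient
      weight: 0.23
  - model: ./FrostMed
    parameters:
      density: [0.35] # density gradient
      weight: 0.18
  - model: ./SnowLotus-10.7B-v2
    parameters:
      density: [1] # density gradient
      weight: 1
merge_method: dare_ties
base_model: ./SnowLotus-10.7B-v2  # assumed base model; not stated in the card
dtype: float16                    # assumed precision
```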