
exl2 quant, original: https://huggingface.co/BlueNipples/DaringLotus-v2-10.7b (original readme below)

DaringLotus-10.7B-v2

This is a DARE-TIES merge of https://huggingface.co/BlueNipples/SnowLotus-v2-10.7B and its parent models. It shares SnowLotus's good prose and relatively decent coherency, leaning a little more toward prose and a little less toward coherency. I like this model for generating great prose when I don't mind regenerating a bit. Like SnowLotus, it's a good model for RP, and I think both of these merged models probably stand up with the best in their weight class (11-13B). Which you prefer may come down to context and preference, which is why I've uploaded both. Credit to Nyx and Sao10k for their model contributions (Frostmaid, FrostWind and SolarDoc), as well as Undi95 and Ikari for Noromaid, the developers of Mergekit, and whoever contributed the medical model used in the frankenmerge portion.
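For anyone curious how a DARE merge works under the hood, here is a toy sketch of the core idea: for each fine-tune, randomly drop delta parameters (fine-tuned minus base), rescale the survivors by the inverse keep rate, then add the weighted deltas back onto the base. This is an illustrative simplification, not Mergekit's actual implementation, and the densities and weights below are made-up values rather than the recipe's.

```python
import random

def dare_merge(base, finetuned_models, densities, weights, seed=0):
    """Toy DARE merge over flat lists of parameters.

    For each fine-tune, keep each delta (fine-tuned minus base) with
    probability `density`, rescale kept deltas by 1/density so the
    expected update is unchanged, then add the weighted deltas to base.
    """
    rng = random.Random(seed)
    merged = list(base)
    for params, density, weight in zip(finetuned_models, densities, weights):
        for i, p in enumerate(params):
            delta = p - base[i]
            if rng.random() < density:  # keep this delta...
                merged[i] += weight * (delta / density)  # ...rescaled
    return merged

base = [1.0, 2.0, 3.0, 4.0]
ft_a = [1.5, 2.0, 2.5, 4.0]
ft_b = [1.0, 3.0, 3.0, 5.0]
print(dare_merge(base, [ft_a, ft_b], densities=[0.5, 0.5], weights=[0.3, 0.7]))
```

With density 1 every delta survives untouched; with lower densities the merge keeps a sparse, rescaled subset, which is what lets several fine-tunes be combined without their deltas drowning each other out.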

GGUF (small selection of Imatrix and regular k-quants): https://huggingface.co/BlueNipples/DaringLotus-SnowLotus-10.7b-IQ-GGUF
EXL2: https://huggingface.co/zaq-hack/DaringLotus-v2-10.7b-bpw500-h6-exl2

Format Notes

Solar is designed for 4k context, but Nyx reports that his merge works to 8k. Given this has a SLERP gradient back into that, I'm not sure which applies here. Use Alpaca instruct formatting.
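For reference, a small helper that builds a standard Alpaca-style prompt (the instruct format recommended above) might look like this; the exact preamble wording is the common Alpaca template, not something specified by this card.

```python
def alpaca_prompt(instruction, user_input=""):
    """Build an Alpaca-style instruct prompt, with or without an input block."""
    if user_input:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{user_input}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Write a short scene set in a snowy forest."))
```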

Recipe

  • model: ./Frostmaid
    parameters:
      density: [0.45] # density gradient
      weight: 0.23
  • model: ./FrostMed
    parameters:
      density: [0.35] # density gradient
      weight: 0.18
  • model: ./SnowLotus-10.7B-v2
    parameters:
      density: [1] # density gradient
      weight: 1
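The entries above correspond to the `models:` section of a Mergekit config. A full config in that shape might look like the following sketch; note that the `merge_method`, `base_model`, and `dtype` lines are assumptions for illustration (the card states this is a DARE-TIES merge but does not publish the full config).

```yaml
models:
  - model: ./Frostmaid
    parameters:
      density: [0.45]   # density gradient
      weight: 0.23
  - model: ./FrostMed
    parameters:
      density: [0.35]   # density gradient
      weight: 0.18
  - model: ./SnowLotus-10.7B-v2
    parameters:
      density: [1]      # density gradient
      weight: 1
merge_method: dare_ties       # assumption: stated merge type, exact key not in card
base_model: ./SnowLotus-10.7B-v2  # assumption: base not stated in card
dtype: float16                # assumption
```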