THIS VERSION IS NOW DEPRECATED. USE V3-0.2. V2 HAS PROBLEMS WITH ALIGNMENT AND THE NEW VERSION IS A SUBSTANTIAL IMPROVEMENT!
Erosumika-7B-v2
Model Details
A DARE TIES merge between Nitral's Kunocchini-7b, Epiculous' Mika-7B, and my FlatErosAlpha, a flattened (to keep the vocab size at 32,000) version of tavtav's eros-7B-ALPHA. In my brief testing, v2 is a significant improvement over the original Erosumika; I guess it won the DARE TIES lottery. The Alpaca and Mistral prompt formats seem to work best. ChatML might also work, but I expect it to produce never-ending generations. Anything goes!
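As a quick illustration of the Alpaca format, here is a minimal sketch of loading the model with transformers and generating from an Alpaca-style prompt. The repository id is an assumption, not confirmed by this card; substitute the model's actual Hugging Face path.

```python
# Minimal sketch: load the merge and prompt it in Alpaca format.
# NOTE: the repo id below is an assumption; use the model's real HF path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "localfultonextractor/Erosumika-7B-v2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Alpaca-style prompt, the format that worked best in testing.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short scene set in a rainy harbor town.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.8
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```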
Since this is an experimental model, it has some quirks:
- On rare occasions it misspells words.
- Very rarely, a stray formatting artifact appears at the end of a generation.
Imatrix GGUF quants by Lewdiculous
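For local inference with those quants, something like the following llama-cpp-python sketch should work. The GGUF filename is hypothetical; use whichever quant file you actually downloaded.

```python
# Minimal sketch: run a GGUF quant locally with llama-cpp-python.
# NOTE: the filename is hypothetical; pick a real file from the quant repo.
from llama_cpp import Llama

llm = Llama(model_path="Erosumika-7B-v2.Q4_K_M.gguf", n_ctx=4096)

out = llm(
    "### Instruction:\nWrite a limerick about the sea.\n\n### Response:\n",
    max_tokens=128,
    temperature=0.8,
    stop=["### Instruction:"],  # stop before the model starts a new turn
)
print(out["choices"][0]["text"])
```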
Limitations and biases
The intended use case for this model is fictional writing for entertainment purposes. Any other use is out of scope. It may produce socially unacceptable or undesirable text, even if the prompt does not include anything explicitly offensive. Outputs may often be factually wrong or misleading.
```yaml
# mergekit configuration for the DARE TIES merge described above
base_model: localfultonextractor/FlatErosAlpha
models:
  - model: localfultonextractor/FlatErosAlpha
  - model: Epiculous/Mika-7B
    parameters:
      density: 0.5
      weight: 0.25
  - model: Nitral-AI/Kunocchini-7b
    parameters:
      density: 0.5
      weight: 0.75
merge_method: dare_ties
dtype: bfloat16
```
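To reproduce the merge, the config can be fed to mergekit, either via the mergekit-yaml command-line tool or its Python entry point. Below is a sketch based on mergekit's documented Python usage, assuming the YAML above is saved as config.yml; double-check the options against the mergekit release you have installed.

```python
# Sketch: run the DARE TIES merge with mergekit's Python API,
# assuming the YAML config above is saved as ./config.yml.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("./config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Erosumika-7B-v2",  # output directory (assumed name)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if available
        copy_tokenizer=True,             # carry the tokenizer into the output
    ),
)
```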
Evaluation results
Scores from the Open LLM Leaderboard:

| Benchmark | Metric | Split | Score |
|---|---|---|---|
| AI2 Reasoning Challenge (25-shot) | normalized accuracy | test | 65.61 |
| HellaSwag (10-shot) | normalized accuracy | validation | 86.29 |
| MMLU (5-shot) | accuracy | test | 62.51 |
| TruthfulQA (0-shot) | mc2 | validation | 69.00 |
| Winogrande (5-shot) | accuracy | validation | 77.27 |
| GSM8k (5-shot) | accuracy | test | 45.19 |