---
base_model:
  - Sao10K/L3.3-70B-Euryale-v2.3
  - nbeerbower/Llama-3.1-Nemotron-lorablated-70B
  - EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
  - SicariusSicariiStuff/Negative_LLAMA_70B
  - TheDrummer/Anubis-70B-v1
library_name: transformers
tags:
  - merge
license: apache-2.0
---

L3.3-MS-Nevoria-70B

Model banner

Model Information

- L3.3 = Llama 3.3
- MS = Model Stock (the merge method)
- 70B = 70 billion parameters

Model Composition

This model combines the storytelling capabilities of EVA with the detailed scene descriptions from EURYALE and Anubis. It's further enhanced with Negative_LLAMA to reduce positive bias, with a touch of Nemotron mixed in.

Choosing the lorablated model as the base was intentional: it creates unusual weight interactions, similar to those in the original Astoria model and Astoria V2. This "weight twisting" effect, achieved by subtracting the lorablated base model during merging, produces an interesting balance in the model's behavior. While unconventional compared to applying components sequentially, this approach was chosen for the distinctive response characteristics it yields.
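The exact merge recipe is not published in this card, so the following mergekit configuration is only an illustrative sketch, assuming the Model Stock method with the lorablated model as the base and the component models listed in the metadata:

```yaml
# Hypothetical mergekit config; the actual recipe is not included in this card.
merge_method: model_stock
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
models:
  - model: Sao10K/L3.3-70B-Euryale-v2.3
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
  - model: TheDrummer/Anubis-70B-v1
dtype: bfloat16
```

With mergekit installed, a config like this would be run with `mergekit-yaml config.yaml ./output-dir`.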

User Reviews
@Geechan - Discord

@Steel Have only briefly tested so far, but you really cooked up an amazing merge with this one, and I mean that wholeheartedly. Insane creativity, perfect character adherence and dialogue, loves to slow burn and take its time, minimal sloppy patterns and writing, and such a breath of fresh air in many ways. I'm enjoying my results with 1 temp and 0.99 TFS (close to something like 0.015 min P). Letting the model be creative and wild is so fun and makes me want to RP more.

No positivity bias either; violent scenes will result in my death and/or suffering, as they should, and I don't see any soft refusals either. ERP has no skimming of details or refusals like you see on some other L3.3 tunes, too.
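The sampler settings mentioned above (temperature 1, TFS 0.99, roughly equivalent to min-p 0.015) can be made concrete. Below is a minimal NumPy sketch of min-p filtering, the truncation rule those settings approximate; the function name is illustrative, not from any particular library:

```python
import numpy as np

def min_p_filter(logits, min_p=0.015, temperature=1.0):
    """Apply min-p truncation to a vector of logits and return the
    renormalized probability distribution over surviving tokens."""
    # Convert logits to probabilities at the given temperature.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Keep only tokens whose probability is at least min_p times the
    # top token's probability, then renormalize over the survivors.
    keep = probs >= min_p * probs.max()
    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()
```

Because the cutoff scales with the top token's probability, min-p prunes aggressively when the model is confident and permissively when the distribution is flat, which suits creative sampling at high temperature.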

IGODZOL - Huggingface

I honestly have no idea why (maybe the negative llama is having that great of an influence) but this merge is miles above the individual tunes that went into making it. Good sir, this model has just become my daily driver. Chapeau bas (hats off to you)

@thana_alt - Discord

I'm thoroughly impressed by this merge of Llama 3.3. It successfully addresses the positivity bias prevalent in the base Llama model, ensuring a more accurate and balanced response. The adherence to system prompts is also notable, with the model demonstrating a keen understanding of context and instruction.

The prose generated by this model is truly exceptional - it's almost as if a skilled chef has carefully crafted each sentence to create a rich and immersive experience. I put this to the test in an adventure scenario, where I had about 10,000 tokens of lorebooks and was managing nine characters simultaneously. Despite the complexity, the model performed flawlessly, keeping track of each character's location and activity without any confusion - even when they were in different locations.

I also experimented with an astral projection type of power, and was impressed to see that the model accurately discerned that I wasn't physically present in a particular location. Another significant advantage of this model is the lack of impersonation issues, allowing for seamless role-playing and storytelling.

The capacity of this model is equally impressive, as I was able to load up to 110,000 tokens without encountering any issues. In fact, I successfully tested it with up to 70,000 tokens without experiencing any breakdown or degradation in performance.

When combined with the "The Inception Presets - Methception Llamaception Qwenception" prompt preset from https://huggingface.co/Konnect1221/, this model truly shines, bringing out the best in the Llama 3.3 architecture. Overall, I'm extremely satisfied with this merge and would highly recommend it to anyone looking to elevate their storytelling and role-playing experiences.

UGI-Benchmark Results:

πŸ† Highest ranked 70b as of 01/17/2025. View Full Leaderboard β†’

Core Metrics

- UGI Score: 56.75
- Willingness Score: 7.5/10
- Natural Intelligence: 41.09
- Coding Ability: 20

Model Information

- Political Lean: -8.1%
- Ideology: Liberalism
- Parameters: 70B

Aggregated Scores

- Diplomacy: 61.9%
- Government: 45.9%
- Economy: 43.9%
- Society: 60.1%

Individual Scores (position on each axis between the two poles)

- Federal ↔ Unitary: 44.2%
- Democratic ↔ Autocratic: 66.2%
- Security ↔ Freedom: 48.1%
- Nationalism ↔ Int'l: 40.4%
- Militarist ↔ Pacifist: 30.4%
- Assimilationist ↔ Multiculturalist: 43.3%
- Collectivize ↔ Privatize: 43.8%
- Planned ↔ Laissez-Faire: 43.1%
- Isolationism ↔ Globalism: 44.8%
- Irreligious ↔ Religious: 55.4%
- Progressive ↔ Traditional: 59.6%
- Acceleration ↔ Bioconservative: 65.2%

Open LLM-Benchmark Results:

Average Score: 43.92%. View Full Leaderboard →

- IFEval: 69.63%
- BBH: 56.60%
- MATH: 38.82%
- GPQA: 29.42%
- MUSR: 18.63%
- MMLU-Pro: 50.39%

Recommended Templates & Prompts
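One review above points to the Methception presets. Independent of any preset, Llama 3.3 (and merges of it) use the standard Llama 3 instruct prompt format; a minimal sketch of assembling a single-turn prompt in that format follows (the helper name is illustrative):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 instruct format,
    using the special header and end-of-turn tokens from that spec."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

In practice, frontends like SillyTavern or `tokenizer.apply_chat_template` in transformers handle this formatting for you; the sketch just shows what the template expands to.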

Quantized Versions

GGUF Quantizations

EXL2 Quantizations

Support the Project:

Support on Ko-fi