
Euryale

L3-70B-Euryale-v2.1 - EXL2 2.3bpw

This is a 2.3bpw EXL2 quant of Sao10K/L3-70B-Euryale-v2.1
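If you want to grab the quant outside of the webui downloader, something like this should work (a minimal sketch using huggingface_hub; the local_dir path is just an example):

```python
# Minimal download sketch (assumes huggingface_hub is installed; local_dir is an example path)
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="DeusImperator/L3-70B-Euryale-v2.1_exl2_2.3bpw",
    local_dir="models/L3-70B-Euryale-v2.1_exl2_2.3bpw",
)
```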

This quant was made using exllamav2 0.0.21 with the default calibration dataset. The rpcal_mk2 quant turned out completely broken (it produced random tokens) at long contexts, so I won't be uploading it for now.

I briefly tested this quant in a few random RPs (including one beyond 8k context) and it seems to work fine. For longer contexts (over 8k) it seems better to use a larger alpha_value for RoPE scaling than the webui recommends (for example, at least 2.5 for 12k context).

I was able to fit this model into 24GB of VRAM with 13.5k context on Windows.
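For reference, here is a minimal exllamav2 loading sketch using the RoPE alpha and context length mentioned above (I actually ran the model through the webui, so the paths and exact values here are assumptions, not the exact setup):

```python
# Loading sketch with exllamav2 (assumed path; values taken from the notes above)
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer

config = ExLlamaV2Config()
config.model_dir = "models/L3-70B-Euryale-v2.1_exl2_2.3bpw"  # path to the downloaded quant
config.prepare()

config.max_seq_len = 13824      # ~13.5k context fit into 24GB VRAM in my case
config.scale_alpha_value = 2.5  # larger RoPE alpha for contexts beyond 8k

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)     # spread layers across available GPU memory

tokenizer = ExLlamaV2Tokenizer(config)
```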

Prompt Templates

See the original readme below for details. In general, this model uses the Llama 3 prompt template.
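For reference, the Llama 3 Instruct format looks roughly like this (a sketch; the system and user strings are placeholders):

```python
# Llama 3 Instruct prompt format (placeholder system/user text)
def build_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```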

Original readme below


She's back!

Stheno's Sister Model, designed to impress.

- Same Dataset used as Stheno v3.2 -> See notes there.
- LoRA Fine-Tune -> FFT is simply too expensive.
- Trained over 8x H100 SXMs and then some more afterwards.

Testing Notes

- Better prompt adherence.
- Better anatomy / spatial awareness.
- Adapts much better to unique and custom formatting / reply formats.
- Very creative, lots of unique swipes.
- Is not restrictive during roleplays. 
- Feels like a big brained version of Stheno.

This is likely due to it being a 70B model instead of an 8B. It gives similar vibes to Llama 2, where 70B models were simply much more 'aware' in the subtler areas and contexts that smaller models like a 7B or 13B simply could not handle.


Recommended Sampler Settings:

Temperature - 1.17
min_p - 0.075
Repetition Penalty - 1.10
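If you run the model through exllamav2 directly rather than SillyTavern, the sampler settings above map to something like the following (a sketch; assumes the model, cache and tokenizer from the loading example, and the prompt strings are placeholders):

```python
# Sampler settings sketch (values from above; model/cache/tokenizer assumed already loaded)
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.17
settings.min_p = 0.075
settings.token_repetition_penalty = 1.10

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
# Placeholder system/user strings, 200 new tokens
output = generator.generate_simple(build_prompt("You are Euryale.", "Hello!"), settings, 200)
print(output)
```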

SillyTavern Instruct Settings:
Context Template: Llama-3-Instruct-Names
Instruct Presets: Euryale-v2.1-Llama-3-Instruct


As per usual, support me here:

Ko-fi: https://ko-fi.com/sao10k

Art by wada_kazu / わだかず (pixiv page private?)