Quant Cartel
PROUDLY PRESENTS
L3-70B-Euryale-v2.1-exl2-rpcal
Quantized using 200 samples of 8192 tokens each from the RP-oriented PIPPA dataset.
Branches:
main
  -- measurement.json
8b8h
  -- 8bpw, 8bit lm_head
6b6h
  -- 6bpw, 6bit lm_head
4.65b6h
  -- 4.65bpw, 6bit lm_head
4.5b6h
  -- 4.5bpw, 6bit lm_head
3.75b6h
  -- 3.75bpw, 6bit lm_head
3.5b6h
  -- 3.5bpw, 6bit lm_head
2.25b6h
  -- 2.25bpw, 6bit lm_head
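Each quant lives on its own branch, so pass the branch name as the revision when downloading. A minimal sketch using huggingface_hub; the repo id is assumed to match this page, and you should swap the revision for whichever bpw level you want:

```python
# Minimal download sketch (pip install huggingface_hub).
# Repo id is an assumption based on this page; pick any branch
# from the list above as the revision.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Quant-Cartel/L3-70B-Euryale-v2.1-exl2-rpcal",  # assumed repo id
    revision="6b6h",  # branch = quant level, here 6bpw with 6bit lm_head
    local_dir="L3-70B-Euryale-v2.1-6bpw",
)
```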
Original model link: Sao10K/L3-70B-Euryale-v2.1
Original model README below.
She's back!
Stheno's Sister Model, designed to impress.
- Same Dataset used as Stheno v3.2 -> See notes there.
- LoRA Fine-Tune -> FFT (full fine-tuning) is simply too expensive.
- Trained on 8x H100 SXMs, and then some more afterwards.
Testing Notes
- Better prompt adherence.
- Better anatomy / spatial awareness.
- Adapts much better to unique and custom formatting / reply formats.
- Very creative, lots of unique swipes.
- Is not restrictive during roleplays.
- Feels like a big brained version of Stheno.
Likely due to it being a 70B model instead of an 8B. The vibes are similar to the Llama 2 era, where 70B models were simply much more 'aware' of the subtler areas and contexts that smaller models like a 7B or 13B simply could not handle.
Recommended Sampler Settings:
Temperature - 1.17
min_p - 0.075
Repetition Penalty - 1.10
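If you run these exl2 quants with exllamav2 directly rather than through a frontend, the values above map onto the sampler roughly as follows. This is a minimal sketch; the attribute names come from exllamav2's ExLlamaV2Sampler.Settings and may shift between versions, so verify against your installed release:

```python
# Sketch: applying the recommended samplers via exllamav2's settings object.
# Attribute names follow recent exllamav2 releases; check your version.
from exllamav2.generator import ExLlamaV2Sampler

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.17               # Temperature
settings.min_p = 0.075                    # min_p
settings.token_repetition_penalty = 1.10  # Repetition Penalty
```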
SillyTavern Instruct Settings:
Context Template: Llama-3-Instruct-Names
Instruct Presets: Euryale-v2.1-Llama-3-Instruct
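For use outside SillyTavern, a "-Names" context template amounts to the standard Llama 3 Instruct format with character names placed in the role headers instead of user/assistant. The sketch below is a hypothetical illustration of that shape, not the actual preset contents; defer to the real SillyTavern presets linked above:

```python
# Hypothetical sketch of a Llama-3-Instruct-Names style prompt: standard
# Llama 3 Instruct special tokens, with character names as the role headers.
# The actual presets may differ; this only illustrates the general shape.
def build_prompt(system: str, turns: list[tuple[str, str]], bot: str) -> str:
    prompt = (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
    )
    for name, message in turns:  # turns as (speaker_name, text) pairs
        prompt += f"<|start_header_id|>{name}<|end_header_id|>\n\n{message}<|eot_id|>"
    # Open the assistant turn so the model continues as the bot character.
    prompt += f"<|start_header_id|>{bot}<|end_header_id|>\n\n"
    return prompt
```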
As per usual, support me here:
Ko-fi: https://ko-fi.com/sao10k
Art by wada_kazu / わだかず (pixiv page private?)