Quant Cartel
PROUDLY PRESENTS
Llama-3-TenyxChat-DaybreakStorywriter-70B-exl2-rpcal
Quantized using 200 samples of 8192 tokens each from an RP-oriented PIPPA dataset.
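For reference, a quant along these lines would be produced with exllamav2's convert.py. The sketch below wraps the call in Python; the calibration filename, directory names, and exact flag values are assumptions for illustration, not a record of the actual command used:

```python
import subprocess

# Hedged sketch of an exllamav2 convert.py invocation for the 6b8h branch.
# Paths and the calibration-file name are placeholders; adjust to your setup.
subprocess.run(
    [
        "python", "convert.py",
        "-i", "Llama-3-TenyxChat-DaybreakStorywriter-70B",  # fp16 source model dir
        "-o", "workdir",                  # scratch/working directory
        "-cf", "out-6b8h",                # finished quant output directory
        "-c", "pippa_rp.parquet",         # RP-oriented PIPPA calibration set (hypothetical filename)
        "-r", "200",                      # 200 calibration rows...
        "-l", "8192",                     # ...of 8192 tokens each, per the note above
        "-b", "6.0",                      # target bits per weight
        "-hb", "8",                       # 8-bit lm_head
    ],
    check=True,
)
```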
Branches:
  main    -- measurement.json
  6b8h    -- 6bpw, 8bit lm_head
  4.65b6h -- 4.65bpw, 6bit lm_head
  2.25b6h -- 2.25bpw, 6bit lm_head
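Since each quant lives on its own branch, you can fetch a single bitrate by passing the branch name as the revision. A minimal sketch with huggingface_hub (the repo id and local path are placeholders):

```python
from huggingface_hub import snapshot_download

# Download only the 6bpw / 8-bit-head quant by checking out its branch.
snapshot_download(
    repo_id="<user>/Llama-3-TenyxChat-DaybreakStorywriter-70B-exl2-rpcal",  # placeholder repo id
    revision="6b8h",                   # branch name from the list above
    local_dir="models/daybreak-6b8h",  # placeholder destination
)
```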
Original model link: Envoid/Llama-3-TenyxChat-DaybreakStorywriter-70B
Quanter's notes
As the default calibration dataset is apparently better in nearly all situations, I've decided to start quanting with it in addition to my standard rpcal fare. I'd appreciate real-world tests to confirm that hypothesis, though, so please leave a comment if you find rpcal to be better than what I've dubbed 'longcal'.
Original model README below.
Caution: This model is capable of producing adult content.
This model is a 50/50 SLERP merge between crestf411/L3-70B-daybreak-storywriter-v0.4 and tenyx/Llama3-TenyxChat-70B.
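For readers unfamiliar with the technique, here is a minimal per-tensor sketch of spherical linear interpolation (SLERP) at t = 0.5 in PyTorch. It illustrates the general idea only, not the exact merge recipe or tooling used for this model:

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    # Normalize flattened copies to find the angle between the two tensors.
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(a_unit @ b_unit, -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < 1e-4:
        # Nearly parallel tensors: plain linear interpolation is stable here.
        merged = (1.0 - t) * a_flat + t * b_flat
    else:
        # Interpolate along the arc between the two points instead of the chord.
        s = torch.sin(theta)
        merged = (torch.sin((1.0 - t) * theta) / s) * a_flat \
               + (torch.sin(t * theta) / s) * b_flat
    return merged.reshape(a.shape).to(a.dtype)

# 50/50 merge of one weight tensor from each parent model:
# merged_w = slerp(w_daybreak, w_tenyxchat, t=0.5)
```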
The resulting model scores significantly higher on the super top secret, private NALA evaluation (Neural-linguistic Assessment of Lifelike Approximation)[1], making it a great choice for novelty RP scenarios.
TenyxChat-DaybreakStorywriter: 76.52
DeepSeek-Coder-V2-Instruct: 68.20
TenyxChat: 57.89
This model utilizes the Llama-3-Instruct prompt format.
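For reference, the standard Llama-3-Instruct template looks like this (the system turn is optional):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```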
[1] The NALA evaluation is not a proper scientific evaluation and should not be used to inform any decisions related to personal safety, personal enjoyment, or any other critical or non-critical matter. NALA score is entirely arbitrary and subject to change without notice.