8bpw/h8 exl2 quantization of xxx777xxxASD/ChaoticSoliloquy-4x8B using the default exllamav2 calibration dataset.
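For reference, a minimal sketch of how an 8bpw/h8 exl2 quant like this is typically produced with exllamav2's convert.py script. The paths here are hypothetical; omitting -c/--cal_dataset leaves the script on its built-in default calibration data, matching the description above:

```python
# Hypothetical paths; convert.py ships with the exllamav2 repository.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "/models/ChaoticSoliloquy-4x8B",        # fp16 source weights
        "-o", "/tmp/exl2-work",                       # scratch/working directory
        "-cf", "/models/ChaoticSoliloquy-4x8B-exl2",  # compiled output directory
        "-b", "8.0",                                  # 8 bits per weight (8bpw)
        "-hb", "8",                                   # 8-bit output head (h8)
    ],
    check=True,
)
```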
ORIGINAL CARD:
(Maybe I'll change the waifu picture later)
An experimental RP-oriented MoE. The idea was to get a model equal to or better than Mixtral 8x7B and its finetunes at RP/ERP tasks.
ChaoticSoliloquy-4x8B

```yaml
base_model: jeiku_Chaos_RP_l3_8B
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.6-L3-8B
  - source_model: jeiku_Chaos_RP_l3_8B
  - source_model: openlynn_Llama-3-Soliloquy-8B
  - source_model: Sao10K_L3-Solana-8B-v1
```
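A config like this is consumed by mergekit's mergekit-moe entry point; a minimal sketch, assuming a hypothetical config filename:

```python
# Hypothetical sketch: building the MoE from the config above with mergekit-moe.
import subprocess

subprocess.run(
    [
        "mergekit-moe",
        "chaotic-soliloquy.yml",    # the YAML config above, saved to disk
        "./ChaoticSoliloquy-4x8B",  # output directory for the merged MoE
    ],
    check=True,
)
# gate_mode: random initializes the router weights randomly rather than
# deriving them from prompts, so routing is untrained out of the box; some
# mergekit versions require an explicit acknowledgement flag for this mode.
```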
Models used
- ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B
- jeiku/Chaos_RP_l3_8B
- openlynn/Llama-3-Soliloquy-8B
- Sao10K/L3-Solana-8B-v1
Vision
Prompt format: Llama 3
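Since the model expects the Llama 3 instruct template, here is a minimal sketch of assembling a prompt by hand; the helper function is made up for illustration:

```python
# Llama 3 instruct format: each turn is wrapped in header tokens and
# terminated with <|eot_id|>; generation continues from the open
# assistant header.
def llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama3_prompt("You are a roleplay partner.", "Hello!"))
```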