


6bpw/h6 exl2 quantization of xxx777xxxASD/ChaoticSoliloquy-4x8B using the default exllamav2 calibration dataset.
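For reference, a quant like this is typically produced with the `convert.py` script from the exllamav2 repo. This is a hypothetical sketch, not the exact command used here; the paths are placeholders, and omitting `-c` falls back to the default calibration dataset mentioned above.

```shell
# -b 6.0 : target average bits per weight (the "6bpw")
# -hb 6  : bits for the output/head layer (the "h6")
python convert.py -i ./ChaoticSoliloquy-4x8B -o ./work \
    -cf ./ChaoticSoliloquy-4x8B-6bpw-h6-exl2 -b 6.0 -hb 6
```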


ORIGINAL CARD:

(Maybe I'll change the waifu picture later.)

Experimental RP-oriented MoE; the idea was to get a model equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.

Quantized versions: GGUF, Exl2

ChaoticSoliloquy-4x8B

```yaml
base_model: jeiku_Chaos_RP_l3_8B
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.6-L3-8B
  - source_model: jeiku_Chaos_RP_l3_8B
  - source_model: openlynn_Llama-3-Soliloquy-8B
  - source_model: Sao10K_L3-Solana-8B-v1
```
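A config like the one above is consumed by mergekit's MoE tool. A minimal hypothetical invocation, assuming mergekit is installed (`pip install mergekit`) and the config is saved as `config.yaml`:

```shell
# Builds the 4x8B mixture from the expert models listed in config.yaml
mergekit-moe config.yaml ./ChaoticSoliloquy-4x8B
```

With `gate_mode: random`, the router weights are initialized randomly rather than derived from prompt hidden states, so all four experts start on equal footing.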

Models used

Vision

llama3_mmproj

Prompt format: Llama 3
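For frontends that don't ship the Llama 3 template, a minimal sketch of the instruct format is below; the helper name and example strings are mine, only the special tokens come from the Llama 3 spec.

```python
def format_llama3(system: str, user: str) -> str:
    """Build a single-turn Llama 3 instruct prompt, ending at the
    assistant header so the model generates the reply."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3("You are a roleplay assistant.", "Hello!")
```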
