
Exllamav2 3.75bpw quantization of Typhon-Mixtral-v1 by Sao10K, quantized with the default calibration dataset.

This bpw fits nicely on 24GB GPUs and leaves room for 32k context. Make sure to enable the 4-bit cache option or you'll run into OOM errors.
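If you're loading it from Python rather than a frontend, the sketch below shows roughly what that looks like with the exllamav2 library. The model path, sampler value, and prompt are placeholders, and `ExLlamaV2Cache_Q4` assumes a version of exllamav2 recent enough to have the quantized cache.

```python
# Minimal sketch: loading this quant with the exllamav2 Python library.
# Assumes a recent exllamav2 with Q4 (4-bit) cache support; the model
# directory below is a placeholder for wherever you downloaded the weights.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q4, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Typhon-Mixtral-v1-3.75bpw-h6-exl2"  # placeholder path
config.prepare()
config.max_seq_len = 32768  # 32k context fits in 24GB with the 4-bit cache

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q4(model, lazy=True)  # 4-bit cache; FP16 cache will OOM at 32k
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # example value, not a recommendation

print(generator.generate_simple("Once upon a time,", settings, 200))
```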

Notes: This model has a good writing style, in my opinion, and works well in RP. I recommend using it with either the Alpaca or Mistral prompt templates in SillyTavern (both shown below).
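For reference, these are the generic forms of those two templates (SillyTavern ships presets for both, so you shouldn't need to type them by hand):

Alpaca:

```
### Instruction:
{your prompt}

### Response:
```

Mistral:

```
[INST] {your prompt} [/INST]
```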


Original Card


GGUFs: https://huggingface.co/Sao10K/Typhon-Mixtral-v1-GGUF

exl2: https://huggingface.co/Sao10K/Typhon-Mixtral-v1-exl2

iMatrix GGUFs by InferenceIllusionist: https://huggingface.co/InferenceIllusionist/Typhon-Mixtral-v1-iMat-GGUF


Typhon - A Custom Experimental Mixtral Merge

An experimental merge I tried for fun. I honestly did not expect it to work for Mixtral at all, considering it's an MoE and the expert gates could easily have been wrecked by a custom merge like this.

From my testing, it handled SFW <--> NSFW scenarios fine, handled both first- and third-person roleplay fine, and seemed fairly smart.

It did pretty well on non-NSFW tasks too, so that's a win.

Due to the nature of the merge, and of Mixtral itself, it is sensitive to prompts, but it does follow them well. Sampler settings aren't picky; I stuck with Universal-Light and it was fine at up to 16k context during testing.


Recipe Below:

```yaml
base_model: mistralai/Mixtral-8x7B-v0.1
models:
  - model: mistralai/Mixtral-8x7B-v0.1
    # no parameters necessary for base model
  - model: smelborp/MixtralOrochi8x7B
    parameters:
      weight: 0.30
      density: 0.47
  - model: notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES
    parameters:
      weight: 0.31
      density: 0.56
  - model: Sao10K/Solstice-Mixtral-v1
    parameters:
      weight: 0.36
      density: 0.64
  - model: Sao10K/Frostwind-Mixtral-v1
    parameters:
      weight: 0.22
      density: 0.44
  - model: KoboldAI/Mixtral-8x7B-Holodeck-v1
    parameters:
      weight: 0.21
      density: 0.36
merge_method: dare_ties
dtype: bfloat16
```
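For anyone wanting to reproduce the merge: the recipe above is a standard mergekit config. Assuming it's saved as typhon.yml (the filename is just an example), it can be run with the mergekit CLI:

```
mergekit-yaml typhon.yml ./Typhon-Mixtral-v1 --cuda
```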