---
base_model:
  - jondurbin/bagel-dpo-8x7b-v0.2
  - mistralai/Mixtral-8x7B-v0.1
  - Sao10K/Sensualize-Mixtral-bf16
  - mistralai/Mixtral-8x7B-v0.1
  - Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
  - mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
  - mergekit
  - merge
---

Quantized with ExLlamaV2 using 200 samples of 8192 tokens each from an RP-oriented PIPPA dataset.
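For reference, a sketch of that conversion step driven from Python. The flag names follow ExLlamaV2's convert.py around v0.0.12 (check them against your install), and every path, including the calibration parquet, is a placeholder:

```python
import subprocess

# Placeholder paths throughout; -b/-hb shown for the 5b6h branch.
subprocess.run([
    "python", "convert.py",
    "-i", "/models/BagelWorldTour-8x7B",    # fp16 source model
    "-o", "/tmp/exl2-work",                 # scratch/working directory
    "-cf", "/models/BagelWorldTour-5b6h",   # compiled output directory
    "-c", "pippa_rp.parquet",               # RP-oriented calibration data
    "-r", "200",                            # 200 calibration samples
    "-l", "8192",                           # 8192 tokens per sample
    "-b", "5.0",                            # target bits per weight
    "-hb", "6",                             # 6-bit lm_head
], check=True)
```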

Branches:

- `main` -- measurement.json
- `2.25b6h` -- 2.25bpw, 6-bit lm_head
- `3.5b6h` -- 3.5bpw, 6-bit lm_head
- `3.7b6h` -- 3.7bpw, 6-bit lm_head
- `5b6h` -- 5bpw, 6-bit lm_head
- `6b6h` -- 6bpw, 6-bit lm_head
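
Each branch is a separate revision of this repository, so a specific quant can be fetched with huggingface_hub; a minimal sketch (the repo id is a placeholder for this repository's actual id):

```python
from huggingface_hub import snapshot_download

# Download the 5bpw quant; replace the placeholder repo_id with this repo's id.
snapshot_download(
    repo_id="<user>/<this-repo>",
    revision="5b6h",
    local_dir="BagelWorldTour-8x7B-5b6h",
)
```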

Requires ExLlamaV2 version 0.0.12 or later.
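
A minimal loading-and-generation sketch for a downloaded branch, using the ExLlamaV2 Python API as it stood in the 0.0.12-era releases (class names may have moved in newer versions; the model path is a placeholder):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "BagelWorldTour-8x7B-5b6h"  # placeholder local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate cache as layers load
model.load_autosplit(cache)               # split across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Once upon a time,", settings, 200))
```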

Original model link: [ycros/BagelWorldTour-8x7B](https://huggingface.co/ycros/BagelWorldTour-8x7B)

Original model README below.


# BagelWorldTour

Requested by kalomaze

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method using mistralai/Mixtral-8x7B-v0.1 as a base.
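
For intuition, here is a toy single-tensor sketch of one common DARE TIES formulation (illustrative only, not mergekit's actual implementation): DARE randomly drops elements of each task vector (finetune minus base), keeping a `density` fraction and rescaling the survivors, then TIES elects a per-parameter sign and averages only the contributions that agree with it.

```python
import torch

def dare_ties(base, deltas, densities, weights):
    """Toy DARE TIES merge for a single tensor (hypothetical helper)."""
    sparsified = []
    for delta, density, w in zip(deltas, densities, weights):
        # DARE: randomly drop (1 - density) of each task vector,
        # rescaling the surviving elements to preserve expected magnitude
        mask = torch.rand_like(delta) < density
        sparsified.append(w * mask * delta / density)
    stacked = torch.stack(sparsified)
    # TIES: elect a per-parameter sign from the weighted sum...
    elected = torch.sign(stacked.sum(dim=0))
    # ...and keep only contributions whose sign agrees with it
    agree = torch.sign(stacked) == elected
    kept = stacked * agree
    # average the surviving contributions onto the base weights
    denom = agree.sum(dim=0).clamp(min=1)
    return base + kept.sum(dim=0) / denom
```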

### Models Merged

The following models were included in the merge:

- mistralai/Mixtral-8x7B-v0.1 + Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- Sao10K/Sensualize-Mixtral-bf16
- mistralai/Mixtral-8x7B-Instruct-v0.1
- jondurbin/bagel-dpo-8x7b-v0.2

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: mistralai/Mixtral-8x7B-v0.1
models:
  - model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
    parameters:
      density: 0.5
      weight: 0.1
  - model: Sao10K/Sensualize-Mixtral-bf16
    parameters:
      density: 0.5
      weight: 0.1
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1
    parameters:
      density: 0.66
      weight: 1.0
  - model: jondurbin/bagel-dpo-8x7b-v0.2
    parameters:
      density: 0.66
      weight: 0.5
merge_method: dare_ties
dtype: bfloat16
```
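
To reproduce the merge, the YAML above can be saved (e.g. as config.yml) and run through mergekit's `mergekit-yaml` entry point; a minimal sketch, with a placeholder output path:

```python
import subprocess

# Run mergekit on the saved config; "--cuda" assumes a GPU is available
# and the output directory name is a placeholder.
subprocess.run(
    ["mergekit-yaml", "config.yml", "./BagelWorldTour-8x7B", "--cuda"],
    check=True,
)
```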