---
library_name: transformers
license: apache-2.0
datasets:
  - nbeerbower/GreatFirewall-DPO
  - nbeerbower/Schule-DPO
  - nbeerbower/Purpura-DPO
  - nbeerbower/Arkhaios-DPO
  - jondurbin/truthy-dpo-v0.1
  - antiven0m/physical-reasoning-dpo
  - flammenai/Date-DPO-NoAsterisks
  - flammenai/Prude-Phi3-DPO
  - Atsunori/HelpSteer2-DPO
  - jondurbin/gutenberg-dpo-v0.1
  - nbeerbower/gutenberg2-dpo
  - nbeerbower/gutenberg-moderne-dpo
base_model:
  - nbeerbower/Dumpling-Qwen2.5-32B
quantized_by: DeusImperator
---

Dumpling-Qwen2.5-32B - EXL2 4.5bpw L

This is a 4.5bpw EXL2 quant of nbeerbower/Dumpling-Qwen2.5-32B.

This quant was made using exllamav2-0.2.7 with the default calibration dataset and an extended quantization sample length (8k tokens instead of the default 2k). It also uses -head_bits=8 and maximum-accuracy quantization (8bpw) for the first and last layers, while all other layers use the normally chosen methods. The method and the name (4.5bpw_L) are inspired by quants like Q4_K_L and Q6_K_L made by bartowski.
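The exact convert invocation is not published; a plausible reconstruction using exllamav2's convert.py is sketched below. Paths are placeholders, and the 8bpw first and last layers are not controlled by a stock flag, so that part would have required a hand-edited measurement or a patched converter.

```sh
# -b: target bits per weight, -hb: head bits, -l: quantization sample length in tokens
python convert.py \
    -i /path/to/Dumpling-Qwen2.5-32B \
    -o /path/to/workdir \
    -cf /path/to/Dumpling-Qwen2.5-32B-exl2-4.5bpw \
    -b 4.5 -hb 8 -l 8192
```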

I tested it in some RPs (including ones with over 12k context) and it seems to work. It fits nicely in 24GB VRAM on Windows with 16k fp16 context (and should fit about twice that with the q8 cache in exl2).
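A minimal loading sketch for the q8-cache setup mentioned above, assuming the exllamav2 0.2.x Python API; the model path and context length are placeholders, not the exact configuration tested:

```python
# Sketch: load the EXL2 quant with a quantized Q8 KV cache (assumed setup).
from exllamav2 import (
    ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q8, ExLlamaV2Tokenizer,
)

config = ExLlamaV2Config("/path/to/Dumpling-Qwen2.5-32B-exl2-4.5bpw")
config.max_seq_len = 32768  # roughly 2x the 16k fp16 context, per the note above

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q8(model, lazy=True)  # q8 KV cache roughly halves cache VRAM vs fp16
model.load_autosplit(cache)                  # stream weights in, splitting across GPUs if present

tokenizer = ExLlamaV2Tokenizer(config)
```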

Prompt Templates

The model seems to use ChatML.
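For reference, ChatML wraps each turn in <|im_start|> / <|im_end|> tokens; this is the standard template for Qwen2.5-based models:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```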

Original readme below



Dumpling-Qwen2.5-32B

nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B finetuned on:

- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo

Method

ORPO tuned with 8x A100 for 2 epochs.
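For context, ORPO (odds ratio preference optimization) folds the preference objective into supervised finetuning, so no separate reference model is needed. Below is a minimal sketch of such a run using TRL's ORPOTrainer; the dataset choice, beta, and batch settings are illustrative assumptions, not the published recipe, which states only the hardware and epoch count.

```python
# Minimal ORPO finetuning sketch with TRL (assumed setup; only "8x A100, 2 epochs"
# is known from the model card, everything else here is illustrative).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(base)

# Any of the listed DPO-style datasets with prompt/chosen/rejected columns works here.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = ORPOConfig(
    output_dir="dumpling-orpo",
    num_train_epochs=2,           # matches the 2 epochs stated above
    per_device_train_batch_size=1,
    beta=0.1,                     # ORPO lambda; assumed value
    max_length=2048,
    max_prompt_length=1024,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,   # `tokenizer=` in older TRL versions
)
trainer.train()
```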