# ACE Step1.5XL_TurboSFT_Merge_50-50 – Multi-Format (FP16 / FP8 / NVFP4)
Welcome to this repository! Here you will find a 50/50 merge of the **ACE-Step 1.5 XL Turbo** and **ACE-Step 1.5 XL SFT** audio models, provided in several quantization formats optimized for different VRAM budgets, especially for use in **ComfyUI**.
## 📌 Original Model & Credits
This repository is a repack/merge based on the fantastic work of the ACE-Step team.
Please visit and support the original creators here:
👉 **[Original ACE-Step 1.5 XL Collection](https://huggingface.co/collections/ACE-Step/ace-step-15-xl)**
---
## 📂 Available Formats
Choose the format that best fits your hardware:
### 1. FP16 (16-Bit Half Precision)
* **File Extension:** `.safetensors` (files without a quantization suffix, or marked accordingly)
* **Description:** The unquantized version. Offers the highest audio quality but also requires the most VRAM. Ideal for high-end GPUs.
### 2. FP8 (8-Bit Quantization)
* **File Extension:** `*fp8.safetensors`
* **Description:** The perfect sweet spot. Halves the VRAM requirement compared to FP16 while keeping the audio quality nearly identical to the original. Highly recommended for most users.
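As a rough illustration of the savings, here is a back-of-the-envelope sketch for the weights alone. The parameter count below is a placeholder for illustration, not the actual size of this model; activations, the text encoder, and the VAE add overhead on top.

```python
# Rough VRAM estimate for model weights only:
# FP16 stores 2 bytes per parameter, FP8 stores 1.
def weight_vram_gb(num_params: float, bytes_per_param: float) -> float:
    return num_params * bytes_per_param / 1024**3

params = 3.5e9  # hypothetical parameter count, for illustration only
print(f"FP16: {weight_vram_gb(params, 2.0):.1f} GB")
print(f"FP8:  {weight_vram_gb(params, 1.0):.1f} GB")
```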
### 3. NVFP4 (4-Bit Quantization) 🚀
* **File Extension:** `*nvfp4.safetensors`
* **Description:** An extremely compressed version for minimal VRAM usage.
* **Important Technical Note:** Converting DiT audio models to 4-bit is highly experimental. To preserve audio quality and completely prevent the `Input tensor must be contiguous` crash in ComfyUI, critical sensitive layers (such as `bias`, `norm`, `embed_tokens`, `timbre_encoder`, `project_in`, and `quantizer`) were **not** quantized and intentionally left in `bfloat16`. Only the heavy Transformer blocks run in 4-bit. This makes the model **stable and ready to use**.
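The selection rule described in the note above can be sketched as a simple name filter. This is only an illustration of the skip logic (the tensor names in the loop are made up for the example); the actual 4-bit conversion itself is not shown.

```python
# Tensors whose names match a "sensitive" pattern stay in bfloat16;
# everything else (the heavy transformer blocks) is quantized to 4-bit.
SENSITIVE = ("bias", "norm", "embed_tokens", "timbre_encoder",
             "project_in", "quantizer")

def keep_bf16(tensor_name: str) -> bool:
    return any(pattern in tensor_name for pattern in SENSITIVE)

# Hypothetical tensor names, for illustration:
for name in ["transformer_blocks.0.attn.to_q.weight",
             "transformer_blocks.0.attn.to_q.bias",
             "timbre_encoder.proj.weight"]:
    target = "bfloat16" if keep_bf16(name) else "nvfp4"
    print(f"{name} -> {target}")
```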
---
## 🛠️ Usage in ComfyUI
1. Download your desired format (FP16, FP8, or NVFP4).
2. Place the file in your ComfyUI directory under `models/diffusion_models` (or the specific folder required by your audio node).
3. Load the model using your standard Model Loader Node.
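Steps 1 and 2 can be scripted with `huggingface_hub` if you prefer. The helper below only builds the destination path for the default ComfyUI folder layout; the download call is shown commented out, and the filename in it is a placeholder (check this repository's file list for the exact name).

```python
import os

# Destination folder for diffusion checkpoints in a standard ComfyUI install.
def comfy_target_dir(comfy_root: str) -> str:
    return os.path.join(comfy_root, "models", "diffusion_models")

# Hypothetical usage (filename is a placeholder):
# from huggingface_hub import hf_hub_download
# hf_hub_download(repo_id="Starnodes/AceStepXL_Merge",
#                 filename="<model>_fp8.safetensors",
#                 local_dir=comfy_target_dir("/path/to/ComfyUI"))
print(comfy_target_dir("/opt/ComfyUI"))
```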
---
*Hosted by [Starnodes](https://huggingface.co/Starnodes/AceStepXL_Merge/)*