chore(card): add hardware compatibility section
README.md CHANGED

@@ -4,17 +4,14 @@ license_name: minimax-model-license
 license_link: https://huggingface.co/MiniMaxAI/MiniMax-M2.7/blob/main/LICENSE
 base_model: MiniMaxAI/MiniMax-M2.7
 tags:
-- rotorquant
-- kv-cache-quantization
-- minimax
-- m2.7
-- moe
-- quantized
+- rotorquant
+- kv-cache-quantization
+- minimax
+- m2.7
+- moe
+- quantized
 library_name: transformers
 pipeline_tag: text-generation
-language:
-- en
-inference: false
 ---
 
 # MiniMax-M2.7-RotorQuant
@@ -23,6 +20,12 @@ inference: false
 
 This is a **documentation repository** that explains how to combine MiniMax-M2.7's weights with RotorQuant inference-time KV cache compression. No weights are stored here — use the base model directly and apply RotorQuant via the Python package or llama.cpp fork.
 
+## Hardware compatibility
+
+| Device | VRAM / RAM | Recommendation |
+| --- | --- | --- |
+| Any host that runs the base model | baseline + runtime savings | RotorQuant/TurboQuant is a KV-cache runtime modifier; pair with any weight variant |
+
 ## What is this?
 
 KV cache compression reduces the memory used by the attention cache during inference. Unlike weight quantization (which is baked into the GGUF/MLX file), KV cache compression is applied at runtime — so the same base weights can be used with or without compression.
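
The card tells users to load the base weights directly and apply RotorQuant at runtime via the Python package or llama.cpp fork. As a rough illustration of what that step could look like with `transformers`, here is a minimal sketch; the `rotorquant` import and `compress_kv_cache()` call are hypothetical placeholders, since the card does not document the package's actual API.

```python
# Minimal sketch: load the base model with transformers, then apply a
# runtime KV-cache compressor. The rotorquant package name and the
# compress_kv_cache() signature are hypothetical; only the Hugging Face
# transformers calls below are real APIs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-M2.7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# from rotorquant import compress_kv_cache  # hypothetical import
# model = compress_kv_cache(model, bits=4)  # hypothetical call: wraps the
#                                           # attention KV cache at runtime;
#                                           # weights on disk are untouched

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```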
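
The closing "What is this?" paragraph argues that cache compression saves memory at runtime without touching the weight files. A back-of-the-envelope sizing makes that concrete; every model dimension below is an illustrative assumption, not MiniMax-M2.7's real configuration.

```python
# Back-of-the-envelope KV-cache sizing. Every dimension here is an
# illustrative assumption, not MiniMax-M2.7's actual configuration.
n_layers = 60        # assumed transformer depth
n_kv_heads = 8       # assumed KV heads (grouped-query attention)
head_dim = 128       # assumed per-head dimension
seq_len = 32_768     # context length being cached
bytes_fp16 = 2       # bytes per element at fp16

# Keys and values are both cached, hence the factor of 2.
kv_bytes = 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_fp16
print(f"fp16 cache:  {kv_bytes / 2**30:.2f} GiB per sequence")

# A 4-bit cache holds the same entries in a quarter of the space,
# ignoring per-block scale/zero-point overhead.
print(f"4-bit cache: {kv_bytes / 4 / 2**30:.2f} GiB per sequence")
```

Under these assumptions the fp16 cache costs about 7.5 GiB per 32k-token sequence and a 4-bit cache about 1.9 GiB, which is why a runtime cache compressor pairs with any weight variant: the saving is independent of how the weights themselves are quantized.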