This is a d-Matrix functional reference of the Llama-3.2-3B model. The reference provides the following functional configurations:
Configuration | Explanation |
---|---|
BASELINE | a reference functionally equivalent to the original model |
BASIC | all linear algebraic operands quantized to MXINT8-64, and all other operations transformed to approximated kernel simulations |
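For intuition, MXINT8-64 refers to a microscaling integer format in which each block of 64 values shares a single power-of-two scale and every element is stored as a signed 8-bit integer. The sketch below is only an illustrative approximation of this kind of block quantization in PyTorch; the exact rounding and scale selection used by the d-Matrix kernels may differ.

```python
import torch

def mxint8_quantize(x: torch.Tensor, block_size: int = 64) -> torch.Tensor:
    """Illustrative MXINT8-style fake quantization (quantize + dequantize).

    Each block of `block_size` values shares one power-of-two scale and each
    element is rounded to a signed 8-bit integer. Simplified sketch, not the
    exact d-Matrix implementation.
    """
    flat = x.flatten()
    pad = (-flat.numel()) % block_size
    flat = torch.nn.functional.pad(flat, (0, pad))
    blocks = flat.view(-1, block_size)

    # Shared power-of-two scale per block, chosen so the largest magnitude
    # in the block fits in the signed 8-bit range [-128, 127].
    max_abs = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-30)
    scale = 2.0 ** torch.ceil(torch.log2(max_abs / 127.0))

    q = torch.clamp(torch.round(blocks / scale), -128, 127)
    return (q * scale).view(-1)[: x.numel()].view_as(x)

# Example: worst-case elementwise error introduced on a random weight matrix
w = torch.randn(4096, 4096)
print((w - mxint8_quantize(w)).abs().max())
```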
Usage
Install d-Matrix Dmx_Compressor first:

```sh
pip install dmx_compressor
```
The following example loads the model, transforms it with Dmx_Compressor, and evaluates it using the EleutherAI lm-evaluation-harness:

```sh
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
```python
from dmx.compressor.modeling import DmxModel
import lm_eval

model_args = "pretrained='d-matrix/Llama-3.2-3B',trust_remote_code=True"
lm = lm_eval.api.registry.get_model("hf").create_from_arg_string(model_args, {"batch_size": 1})

# Transform the model with DMX
lm._model = DmxModel.from_torch(lm._model).to_basic_model()  # Using BASIC configuration

task = "wikitext"  # assign the desired task here
eval_results = lm_eval.evaluate(lm, lm_eval.tasks.get_task_dict([task]))
```
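The returned dictionary follows the harness' standard layout, with per-task metrics under the "results" key; the exact metric names depend on the task and harness version (for "wikitext" they typically include word_perplexity, byte_perplexity, and bits_per_byte). A minimal way to inspect them:

```python
# Print the metrics reported for the chosen task
print(eval_results["results"][task])
```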
Evaluation results

Metric | Configuration | Dataset | Value (self-reported) |
---|---|---|---|
perplexity | BASELINE | Wikitext | 9.270 |
perplexity | BASIC | Wikitext | 97.378 |